
Abstract

In underwater environments, the scattering and absorption phenomena affect the propagation of light, degrading the quality of captured images. In this work, the authors present a method based on a physical model of light propagation that takes into account the most significant effects on image degradation: absorption, scattering, and backscattering. The proposed method uses statistical priors to restore the visual quality of the images acquired in typical underwater scenarios.
Underwater Depth Estimation and Image Restoration Based on Single Images
Paulo Drews-Jr, Erickson R. Nascimento, Silvia Botelho and Mario Campos
Images acquired in underwater environments undergo a degradation process due to the inherent
complexity of the interaction of light with the medium. Such interaction includes numerous phenomena
such as multipath refraction and reflection of light rays on particles in suspension with dynamic motion
patterns. The complexity of a possibly complete model may render it unfeasible to be used in several
applications which may requires frame rate performance. Thus, we have adopted a simplified model that
takes into account the most significant effects to image degradation, i.e. absorption, scattering and
backscattering. In this work, we present a restoration method based on a physical model of light
propagation together with statistical priors of the scene. Our approach is able to simultaneously recover
the medium transmission and the scene depth as well as to restore the visual quality of the images
acquired in typical underwater scenarios.
An increasing number of real-world applications are related to underwater environments, among which
are fisheries, environmental and structural monitoring and inspections, and oil and gas exploration.
Petroleum and natural gas are still the most important sources of energy in the world and researchers
have recently discovered relevant oil and gas reserves along the coast of Brazil and Africa underneath
what is known as pre-salt rock formations. Pre-salt layers of rocks in the earth's crust are composed only
of petrified salt covering large areas on the ocean floor. Recent findings have unveiled that over millions
of years large amounts of organic matter have been deposited beneath the layers of pressed salt between
the west coast of Africa and the eastern shores of South America. This organic matter has been
transformed into oil, which in many areas is engulfed with gas.
In Brazil, the pre-salt area spans a range of about 800 kilometers along the Brazilian coast. Geological
studies have estimated that the oil and gas reserves in that area are in the order of 80 billion barrels,
which would place Brazil as the sixth largest holder of reserves in the world behind Saudi Arabia, Iran,
Iraq, Kuwait and the United Arab Emirates.
Exploring and working in the pre-salt reserves present an important technological issue that includes the
ability to perceive the underwater environment. Techniques based on machine vision can help humans to
monitor and to supervise activities in these scenarios, as well as to enable carrying out missions with
autonomous robotic vehicles. In general, computer vision algorithms assume that the medium does not
affect light propagation. However, this assumption does not hold in scattering media such as underwater
scenes. Indeed, the phenomena of scattering and absorption affect the propagation of light, degrading
the quality of the captured images.
Thus, efforts in the fields of image processing and computer vision to improve the
quality of underwater images may contribute to several applications, especially those related to the
offshore oil and gas industry. In this paper, we address the problems of image restoration, to improve the
visual quality of underwater images, and depth estimation of the scene to extract geometrical information
of the objects therein.
Image restoration and depth estimation are ambiguous problems, since in general the available number
of constraints is smaller than the number of unknown variables. One of the strategies most commonly
adopted to tackle these problems in computer vision is to impose extra constraints that are based on
some a priori knowledge about the scene. These extra constraints are called priors. In general, a prior can
be a statistical/physical property, ad-hoc rules, or even heuristic assumptions. The performance of the
algorithms is limited by the extent to which the prior is valid. Some of the widely used priors in image
processing are smoothness, sparsity, and symmetry.
Inspired by the observation of Kaiming He and colleagues [1] that natural scenes tend to be dark in at least
one of the RGB color channels, we derived a new prior by making observations in underwater images on
the relevance of the absorption rate in the red color channel. By collecting a large number of images from
several image search engines, we tested our prior and show its applicability and limitations on images
acquired from real scenes.
The main contribution of the work described in this paper is an extension of our previous work to deal
with underwater image restoration called Underwater Dark Channel Prior (UDCP) [2]. We present a
deeper study on the method, including an extensive statistical experimental verification of the assumption
following the guidelines described in He and colleagues [1]. Additionally, we present a new application of
the UDCP prior for underwater image restoration and depth estimation. We evaluated the algorithm using
qualitative and quantitative analysis through a new set of data, including images acquired in the Brazilian
coast. The techniques presented in this work open new opportunities for developing automatic algorithms
for underwater applications that require high-quality visual information.
Previous Approaches in Image Restoration of Underwater Images
The works in literature have approached the problem of restoring images acquired from underwater
scenes from several perspectives: using specific purpose hardware, stereo images and polarization filters
[3]. Despite the improvements achieved by these approaches, they still present several limitations. For
instance, methods that rely on specialized hardware are expensive and complex to realize. The use of
polarizers, for example, requires moving parts and is hard to implement in automatic acquisition tasks.
In a stereo vision system approach, the correspondence problem becomes even harder due to the strong
effects imposed by the medium. Methods based on multiple images require at least two images of the
same scene taken under different environmental conditions, which makes them inadequate for real-time
applications. Thus, the problem of image restoration for underwater scenes still demands much research
effort in spite of the advances that have already been attained.
In the past few years, a large number of algorithms for image restoration based on single image have been
proposed, with the works of He and colleagues [1] and of Raanan Fattal [4] being the most cited in the field.
While these works have shown good performance for enhancement in the visual quality for outdoor
terrestrial images, there is still room for improvement when they are applied to underwater images. As
far as single image methods are concerned, He and colleagues have proposed one of the most popular
methods called Dark Channel Prior (DCP). Liu Chao and Meng Wang [5], John Chiang and Ying-Ching Chen
[6], and Seiichi Serikawa and Huimin Lu [7] have also applied the DCP method to restore the visual quality
of underwater images. However, these works do not address some of the fundamental DCP limitations
related to the absorption rate of the red channel, and do not discuss relevant issues with the basic DCP
model. Differently from outdoor scenes, the underwater medium imposes wavelength-dependent rates of
absorption, mainly in the red channel. Thus, Paulo Drews-Jr and colleagues [2] proposed a modified
version of the DCP to overcome this limitation of DCP prior for applications in underwater imaging. Here
we build upon and extend the work presented in [2], providing an extensive study about the prior with
applications to image restoration and depth estimation. Furthermore, we provide new results of image
restoration using qualitative and quantitative analysis.
Underwater Attenuation Light Modelling
The underwater image formation results from a complex interaction between the light, the medium, and
the scene. A simplified analysis of this interaction is possible, yet maintaining physical plausibility. To this
end, the first-order effects are forward scattering and backscattering, i.e. the scattering of light rays
at small and at large angles, respectively. The absorption of light is associated with these two effects,
since together they account for contrast degradation and color shift in images. Fig. 1 illustrates these effects.
According to Yoav Schechner and Nir Karpel [3], backscattering is the prime reason for image contrast
degradation, thus the forward scattering can be neglected. Therefore, the underwater attenuation light
model is a linear combination of the direct light and the backscattering. The direct light is defined as the
fraction of light irradiated by the scene where a part is lost due to scattering and absorption. On the other
hand, the backscattering does not originate from the object's radiance, but it results from the interaction
between the environment’s illumination sources and the particles dispersed in the medium. For a
homogeneously illuminated environment, the backscattered light can be assumed to be constant, and it can
be obtained from the image by using a completely haze-opaque region or by finding the farthest pixel in
the scene. However, this information cannot be automatically acquired from a single image; finding
the brightest pixel in the dark channel is assumed to be an adequate approximation.
Fig. 1 Diagram illustrating the underwater attenuation light model. The dashed lines show the forward
scattering and backscattering effects, created by the scattering of light rays at small and at large angles,
respectively. Direct light is the portion of light irradiated by the scene that reaches the image plane.
One important aspect of the linear model is the weight of the direct and of the backscattering components
in the final image. The experimental analysis indicates an exponential behavior between the depth and
the attenuation coefficient. This coefficient is an inherent property of the medium and it is defined as the
sum of the absorption and scattering rates. Since both rates are wavelength dependent, the attenuation
coefficient is different for each wavelength. In the literature, this exponential weight is called the medium
transmission. The depths in the scene can be estimated up to a scale factor by applying the log operation to
the medium transmission value.
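The exponential relationship between depth and transmission can be written as a small sketch; the symbol eta for the per-channel attenuation coefficient is our own notation, not the article's:

```python
import numpy as np

def transmission(depth, eta):
    """Medium transmission for a scene depth and an attenuation
    coefficient eta (absorption rate + scattering rate, per channel)."""
    return np.exp(-eta * depth)

def depth_up_to_scale(t):
    """Scene depth recovered from the transmission by the log operation;
    the unknown factor 1/eta makes it a depth up to scale."""
    return -np.log(np.clip(t, 1e-6, 1.0))
```

Since eta is an unknown property of the medium, `depth_up_to_scale` returns eta times the true depth, which is what "up to a scale factor" means here.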
The image restoration is performed by inverting the underwater attenuation light model. Assuming that
we are able to estimate the medium transmission and the backscattering light, the restored image is
computed by subtracting the backscattering light from the observed image, dividing the result by the
medium transmission, and adding the backscattering light back.
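Assuming the linear model I = J·t + B·(1 − t) described above, this inversion can be sketched as follows; the lower bound `t0` is a common safeguard against division by near-zero transmission and is our addition, not from the text:

```python
import numpy as np

def restore(img, t, B, t0=0.1):
    """Invert the attenuation model I = J*t + B*(1 - t):
    J = (I - B) / max(t, t0) + B.  img is HxWx3 in [0,1], t is the
    per-pixel medium transmission, B the backscattering light."""
    t = np.clip(t, t0, 1.0)[..., None]
    return (img - B) / t + B
```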
Dark Channel Prior
The Dark Channel Prior is a statistical prior based on the observation that natural outdoor images taken on clear
days exhibit mostly dark intensities in a square patch of the image [1]. It was inspired by the well-known
dark-object subtraction method from the remote sensing field. The authors considered that in most of
the non-sky patches on images of outdoor scenes, at least one color channel in the RGB representation
would have some pixels whose intensity were almost zero. This low intensity in the dark channel was due
to three factors: a) shadows in the images; b) colorful objects or surfaces where at least one color has low
intensity and c) dark objects or surfaces. They collected a large number of outdoor images and built
histograms, and with those, they have shown that about 75 percent of the pixels in the dark channel had
zero values, and the intensity of 90 percent of the pixels was below 25 on a scale of [0;255]. Those results
provide strong support for the dark channel prior assumption for outdoor images. This prior allows an
approximate estimation of the medium transmission in local patches. He and
colleagues have shown that the Dark Channel Prior provided excellent results on hazy scenes.
The use of a local patch affects the performance of the medium transmission estimation. He and
colleagues proposed the use of a spectral matting method to refine the estimated transmission. Their
method presents good results but it requires a high computational effort to process the Laplacian matrix.
Other works proposed approximate solutions to make it faster by using quadtrees, Markov Random Fields,
or filtering techniques, e.g. guided filter or bilateral filter.
Dark Channel Prior on Underwater Images and its variations
Due to the good results obtained by the DCP method for haze scenes and the similarities in the modelling
of a haze image and an underwater image, some previous works applied the Dark Channel Prior to process
underwater images. One of the first works to use DCP in underwater images was Chao and Wang [5]. The
reported results show a limited number of experiments in which the visual quality of the results does not
present a significant improvement, even for images with small degradation. Chiang and Chen [6] also
proposed an underwater image restoration method using standard DCP. Their method obtained good
results for real underwater images, but it was limited by the standard DCP method in underwater images
and by the assumption that the image is predominantly blue. Recently, Serikawa and Lu [7] proposed a
variation of the DCP that filters the medium transmission by using Joint Trilateral Filter. Despite the
improvement attained in the image restoration when compared to standard DCP, the limitation related
to the red channel remains the same.
Kristofor Gibson and colleagues [8] proposed a variation of the DCP where they replaced the minimum
operator in an image patch by the median operator. They named the method MDCP. They chose the
median operator due to its ability to preserve edges. Their approach could provide good estimation when
the effects of the medium are approximately wavelength independent; in this case, the behavior tends to
be similar to standard DCP.
Nicholas Carlevaris-Bianco and colleagues [9] proposed an underwater image restoration using a new
interpretation of the DCP for underwater conditions. The proposed prior explores the fact that the
attenuation of light in water varies depending on the color of the light. Underwater medium attenuates
the red color channel at a much higher rate than the green and blue channels. Differently from the
standard DCP, that prior is based on the difference between the maximum of the red channel and the
maximum of each of the other channels (G and B), instead of only the minimum as in the DCP. The method works well
when the absorption coefficient of the red channel is large. The method shows some shortcomings to
estimate the medium transmission in typical shallow waters.
Underwater Dark Channel Prior and the Image Restoration
The statistical correlation of a low dark channel in haze-free images is not easy to test for
underwater images due to the difficulty of obtaining real images of underwater scenes in an out-of-water
condition. However, the assumptions made by He and colleagues are still plausible, i.e. at least one color
channel has some pixels whose intensity are close to zero. These low intensities are due to a) shadows; b)
color objects or surfaces having at least one color channel with low intensity, e.g. fishes, algae or corals;
c) dark objects or surfaces, e.g. rocks or dark sediment.
Despite the fact that the dark channel assumption seems to be correct, some problems arise from the
wavelength independence assumption. There are many practical situations where the red channel is
nearly dark, which corrupts the transmission estimated by the standard DCP. Indeed, the red channel
suffers an aggressive decay caused by the absorption of the medium, making it approximately zero
even in shallow waters. Thus, the information in the red channel is unreliable.
We proposed a new prior that considers just the green and the blue color channels to overcome this issue.
We named this prior Underwater Dark Channel Prior (UDCP). This prior allows us to invert the model and
to obtain an estimate of the medium transmission. The medium transmission and the backscattering light
constants provide enough information to restore the images.
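A minimal sketch of the UDCP transmission estimate, assuming the standard dark-channel formulation restricted to the green and blue channels; the parameter `omega` and the naive minimum filter are illustrative choices, not the authors' exact implementation:

```python
import numpy as np

def udcp_transmission(img, B, patch=15, omega=0.95):
    """Estimate the medium transmission with the UDCP: the dark channel
    is computed over the green and blue channels only, since the red
    channel is unreliable underwater.  img is HxWx3 RGB in [0,1], B is
    the backscattering light; omega < 1 keeps a trace of the medium."""
    norm = img[..., 1:] / B[1:]        # drop red, normalize by B
    mins = norm.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    dark = np.empty_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return 1.0 - omega * dark
```

A region of pure backscatter (the farthest scene point) has a dark channel near one and thus a transmission near zero, as expected.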
We performed an experimental verification to evaluate the assumption of the new prior based on two
assumptions: a) the main assumption of the DCP for outdoor scenes remains valid when applied only to the green
and blue channels, and b) the behavior of the UDCP histogram in underwater scenes is plausible.
Since He's dataset has not been made publicly available, we created our own following the guidelines
proposed by He and colleagues in [1]. The dataset is composed of 1,022 outdoor landscape images greater
than 0.2 Mpixels from the SUN database [10], see Fig. 2 for image samples. We selected a subset of images
of natural scenes, i.e. images without any human-made object, comprising 274 images (first row in Fig. 2).
We then computed the distribution of pixel intensities, where each bin contains 16 intensity levels from the
interval [0;255] (Fig. 3). The histograms were obtained by using i. only the natural images and ii. all
images of the extended dataset. In this figure, each row depicts the results for the minimum operator in
a small patch window using only the RED, GREEN and BLUE channels, the DCP (dark channel in all channels)
and the UDCP (dark channel in green and blue).
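The binning described above can be reproduced with a short helper (an illustrative sketch, not the authors' code):

```python
import numpy as np

def dark_channel_histogram(dark_channels, bins=16):
    """Distribution of dark-channel intensities over a set of images,
    with each bin spanning 16 intensity levels of the [0;255] range."""
    values = np.concatenate([np.ravel(dc) for dc in dark_channels])
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    return hist / hist.sum()           # normalize to probabilities
```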
Fig. 2 - Sample images of our dataset. The first row shows images of natural scenes and the second
row shows scenes that include human-made structures/objects (images acquired from the SUN Dataset [10]).
Even though our dataset and that of He and colleagues are different, ours was collected using the same
guidelines as theirs, and thus some similarity is to be expected. Indeed, the histograms present
similarities, but also important differences. The probability of the first bin (intensities between 0-15) is
smaller than the one presented by He and colleagues [1]. They reported ≈ 90% for the first bin in the DCP
while the probability for our dataset is ≈ 45% (Fig. 3). One can see that the highest probability, ≈ 50%, is
obtained for the histograms of natural scenes (Fig. 3 - 1st and 3rd rows), which is the expected case for typical
underwater scenes. The most important observation is related to the significance of each channel for the
prior. The lower intensity bins of the blue channel (Fig. 3) are dominant mainly in natural scenes. The red
channel is still dark but it is the most equalized histogram for all scenarios. The green channel presents
similar behavior. Thus, the scarcity of blue in the final composition of the scene explains the
prevalence of this channel in both the DCP and the UDCP.
One can observe the close similarities between DCP and UDCP statistics in the histogram in Fig. 3. We
show Pearson's linear correlation coefficient in Table I, which quantifies these similarities. The
correlation coefficient ranges over [-1;1]: a value close to one indicates an almost perfect linear
relationship, a value close to zero indicates that the data are uncorrelated, and negative values indicate an inverse relationship.
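Pearson's coefficient itself is straightforward to compute; a minimal sketch over two histograms (equivalent to `np.corrcoef(h1, h2)[0, 1]`):

```python
import numpy as np

def pearson(h1, h2):
    """Pearson's linear correlation coefficient between two histograms."""
    a = np.asarray(h1, float) - np.mean(h1)
    b = np.asarray(h2, float) - np.mean(h2)
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
```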
Fig. 3 - The distribution of pixel intensity of the dark channel for natural scenes of the extended
dataset (1st and 3rd rows) and for all images of the extended dataset (2nd and 4th rows). We show the
histogram for the red, green, and blue channels, the DCP (in black), and the UDCP (in cyan), respectively.
Table I Pearson's correlation coefficient between DCP and UDCP, and the Red, Green and Blue channels.
Natural Scenes
Extended Dataset
One can readily see that there is a strong correlation between DCP and UDCP, which means that both
methods are based on similar assumptions about the scene, i.e. low intensity dark channel. The value of
the correlation coefficients for the blue channel and DCP are approximately equal to one meaning that
they are also strongly correlated. In natural scenes, the correlation between DCP and green channel is the
smallest due to the presence of grass and trees in the scenes, which causes an increase in the intensities
of this color channel.
Fig. 4 Sample images from the underwater datasets. Images from the reduced dataset are shown in
the top row and images from the extended dataset are shown in the bottom row (First Row Courtesy
of Rémi Forget, Second Row Courtesy of Kristina Maze).
Fig. 5 - The distribution of pixel intensity of the dark channel for the reduced dataset (1st and 3rd rows)
and for the extended dataset (2nd and 4th rows). We show the histogram for the red, green, blue
channels, DCP (in black color), and UDCP (in cyan color), respectively.
We also created two datasets of underwater images to evaluate the influence of the medium and to verify
the UDCP assumptions. The datasets creation follows the guidelines of He and colleagues [1]. The first
dataset (reduced) was created by extracting the images from a single user of the Flickr website. This
dataset contains 65 high-quality photos acquired with the same camera. The images, which include coral
reefs, rocks, marine animals, wrecks, etc., were acquired during diving activities in several places around the
world (thus with different turbidity levels). The first row of Fig. 4 shows sample images of the reduced dataset.
The second dataset (extended) was obtained by collecting images from several image search engines on
the internet. This dataset is composed of 171 underwater images acquired under diverse media
conditions, water depths and scenes, which provides a rich source of information. All the images are
approximately homogeneously illuminated, which limits the water depth to shallow waters. The second row of
Fig. 4 shows sample images of the extended dataset.
Differently from the histograms of outdoor scenes, the dark channel of the red channel is markedly dark, i.e.
≈ 90% of the pixels are in the first bin (Fig. 5). This agrees with the assumption of the UDCP, i.e. the highest
absorption rate for the red channel. As expected, the dark channel for the blue and the green channels
are similar but many values cover a broader range due to the effects of the interaction of light with the
medium. The histograms of the DCP are somewhat consistent with what we would expect for non-
participating media. Hence, the DCP is not able to adequately recover the medium transmission. However,
the bin values in the UDCP histograms are more evenly distributed, which indicates that UDCP is a better
approach to estimate the medium transmission.
Experimental verification shows that the UDCP assumption is a more general supposition
than the DCP assumption. However, these results do not guarantee the quality of the estimated
transmission. UDCP and DCP obtain similar histograms for natural scenes, as shown by the correlation
analysis. These results indicate that both are based on similar assumptions.
Another important characteristic concerns the blue channel, which in natural scenes tends to be darker
than the other channels. The underwater medium is typically blue, thus increasing the intensities of this
color channel. This fact corroborates the underwater dark channel assumption.
Experimental Results
If, on the one hand, the experiments showed that the assumptions of the UDCP are valid, on the other hand
it is important to find out whether the UDCP outperforms the other DCP-based methods for restoring images. In
order to evaluate the performance of UDCP, we applied the standard DCP to underwater images, as
proposed by Chao and Wang [5], and Chiang and Chen [6]. The MDCP [8] was also applied to underwater
images, but with the refinement proposed by He and colleagues [1]. We also obtained results for Bianco's
prior (BP) [9]. Our evaluation is based on qualitative and quantitative analysis. Figs. 6 and 7 show the
qualitative results for underwater images collected from the internet. Fig. 8 shows the sample of images
from three underwater videos that we captured. We acquired these videos in a coral reef at the Brazilian
Northeast Coastal area at the depth of approximately 10m. They are composed of 150, 138 and 610
frames, and the sample images of these videos are figs. 8(a), 8(b) and 8(c), respectively.
Fig. 6 Restored images and depth estimation: (a) Underwater image with regions where the
backscattering constant was estimated, using UDCP (orange patch), DCP (red patch), MDCP (yellow
patch), and the BP (purple patch). Restored images using UDCP (b), DCP (d), MDCP (e), and BP (f).
Colorized depth maps obtained using UDCP (c), DCP (g), MDCP (h), and BP (i) (The credits of the image
(a): Kevin Clarke).
In the quantitative evaluation, we used the metric proposed by Nicolas Hautière and colleagues [11]
to analyze their method for weather-degraded images. We adopted this metric in the present work due
to the similarities between weather-degraded images and underwater images. Three indexes are
defined in the metric: e, r̄, and s. The value of e evaluates the ability of a method to restore edges that
were not visible in the degraded image but are visible in the restored image. The value of r̄ measures
the quality of the contrast restoration; a similar technique was adopted by [3] to evaluate restoration in
the underwater medium. Finally, the value of s is obtained
from the amount of pixels which are saturated (black or white) after applying the restoration method but
were not before. These three indexes allow us to estimate an empirical restoration score
= e + 𝑟 ̅+ 1 − s
[11], where larger values mean better restoration. Table II shows the obtained results.
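Under the definitions above, the s index and the combined score can be sketched as follows; the function names are ours, and e and r̄ are assumed to have been computed as in [11]:

```python
import numpy as np

def saturation_index(original, restored):
    """The s index: fraction of pixels saturated (pure black or white)
    after restoration that were not saturated before."""
    sat_before = (original <= 0.0) | (original >= 1.0)
    sat_after = (restored <= 0.0) | (restored >= 1.0)
    return float(np.mean(sat_after & ~sat_before))

def restoration_score(e, r_bar, s):
    """Empirical restoration score e + r_bar + (1 - s); larger is better."""
    return e + r_bar + (1.0 - s)
```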
Fig. 7 A second example of restored images and depth estimation: (a) Underwater image with
regions where the backscattering constant was estimated, using UDCP (orange patch), DCP (red
patch), MDCP (yellow patch), and the BP (purple patch). Restored images using UDCP (b), DCP (d),
MDCP (e), and BP (f). Colorized depth maps obtained using UDCP (c), DCP (g), MDCP (h), and BP (i).
(The credits of the image (a): Ancuti and colleagues [12]).
One example of these experiments is depicted in Fig. 6, which shows the original image, Fig. 6(a), the
restored image, Fig. 6(b), and the colorized depth map, Fig. 6(c), obtained using the UDCP approach. We
colorized the depth maps to aid visualization: reddish colors represent closer points and bluish
colors represent points that are further away.
Figs. 6 and 7 also show the results obtained by applying the methods proposed by other authors (i.e.
DCP, MDCP, and BP) on images from the extended dataset. This dataset is detailed in Section Underwater
Dark Channel Prior and the Image Restoration. We show the underwater images with the backscattering
light estimation in figs. 6(a) and 7(a). The estimation of the backscattering constant obtained by UDCP
seems to be the most plausible, i.e. near the farthest point in the image. The other methods fail in the
estimation in at least one of the images. They identify the backscattering light in bright surfaces of the
scene instead of the farthest point.
Table II Quantitative evaluation of the underwater restoration methods using the metric of [11]. The
best method is highlighted in bold. We show the results for the sample images in Figs. 6, 7
and 8 and the averages over the extended dataset and over the videos of Fig. 8.
(Table rows: Fig. 6(a); Fig. 7(a); Average of the Extended Dataset; Fig. 8(a), Average of Video 1;
Fig. 8(b), Average of Video 2; Fig. 8(c), Average of Video 3.)
The restored images produced by the UDCP, figs. 6(b) and 7(b), show that there was an improvement as far
as contrast and color fidelity are concerned. The values in Table II show that the restoration using the UDCP
presented the best values of the metric for all experiments, including the full dataset. In Fig. 6, the UDCP (b)
and BP (f) presented the best results for contrast and visibility. However, BP fails to estimate the backscattering
constant, generating incorrect depth information, as shown by the colorization of the restored image.
This is corroborated by the fact that the depth maps estimated by both methods are similar.
improvement in the estimation of the ocean floor of the scene is noticed for the image restored using
UDCP. The improvement provided by the DCP and the MDCP is imperceptible because neither method is able
to recover the depth map correctly.
The UDCP method obtained the best results in Fig. 7, while the BP, Fig. 7(f), presented the worst results.
The values in Table II also confirm this fact. This can be explained by the fact that BP underestimates the
attenuation coefficient, limiting the quality of the map. This is due to the behavior of the red channel, which
is not completely absorbed. The results obtained by the standard DCP, Fig. 7(d), and the MDCP, Fig. 7(e), are
also related to this fact, since both methods are able to provide good results for the depth map and the restoration.
Fig. 8 shows the results obtained by applying the methods to the videos that we have captured. We
depicted one sample image from each video, shown in figs. 8(a), 8(b) and 8(c). For these sample images,
the backscattering constant is well balanced in all wavelengths due to the characteristics of the water and
the small water depth. In this case, the standard DCP, MDCP and UDCP present similar results in
qualitative terms. The BP method fails to estimate the depth map, in a similar way as the one shown in
Fig. 7. Thus, we omit the results using BP because they are similar to those obtained by the underwater
camera, i.e. first row. The UDCP attained the best results for scenes located at greater depths, evidenced
by the visibility of the rock in the top left of the restored image in Fig. 8(i).
Table II shows the average values for the videos illustrated by sample images in Fig. 8. We can clearly see
that our method presents better results using the metric, especially due to its ability to improve edges.
The average of the extended dataset is also shown, and the results are still favorable to the UDCP. One can
see that the metric presents large values for the video associated with Fig. 8(a). This is because the number of
edges in the underwater image is small, assuming the parameters adopted by Hautière and colleagues
[11]. The increase provided by the restoration is therefore large, producing large values of the e index and,
consequently, of the overall score.
Although the standard DCP is intuitive, it has shown limitations to its use in underwater conditions
because of the high absorption of the red channel. The BP method presented good results in specific
contexts, but it underestimated medium transmission. The MDCP presents similar results to the DCP.
Finally, UDCP presented the most significant results in underwater conditions. It provides good restoration
and depth estimation even in situations where other methods can fail.
Despite the fact that the UDCP presented meaningful results, it still lacks reliability and robustness due to
the limitations imposed by its assumptions. On the one hand, the use of single-image methods to restore
images can enhance the quality, but on the other hand, it is susceptible to the variations in scene
characteristics. Thus, one important direction is to use the information provided by the image to estimate
a confidence level, which would prove to be useful in practical applications, e.g. robotics. Another
important direction to be pursued is to use image sequences to disambiguate the parameters of the
model. Video acquisition is a common capability in almost all types of underwater cameras commonly
used by divers and ROVs. In this case, a single image restoration method can be used for initial estimation,
which would be followed by successive refinements as other images become available. Finally, for several
applications, it might be necessary to enhance the model with the inclusion of the effect of artificial
illumination in the scene. It will enable us to deal with deep-water conditions.
Fig. 8 Image restoration of three underwater videos acquired in the Brazilian northeast coastal area. The first row shows three sample images, one for each video. The restoration results for these sample images obtained by the standard DCP, UDCP and MDCP are shown in the second, third and fourth rows, respectively.
[1] K. He, J. Sun, and X. Tang. Single image haze removal using dark channel prior, in IEEE CVPR, pages 1956–1963, 2009.
[2] P. Drews-Jr, E. Nascimento, F. Moraes, S. Botelho, and M. Campos. Transmission estimation in underwater single images, in IEEE ICCV - Workshop on Underwater Vision, pages 825–830, 2013.
[3] Y. Schechner and N. Karpel. Recovery of underwater visibility and structure by polarization analysis. IEEE JOE, 30(3):570–587, 2005.
[4] R. Fattal. Single image dehazing. ACM TOG, 27(3), 2008.
[5] L. Chao and M. Wang. Removal of water scattering, in ICCET, volume 2, pages 35–39, 2010.
[6] J. Chiang and Y. Chen. Underwater image enhancement by wavelength compensation and dehazing. IEEE TIP, 21(4):1756–1769, 2012.
[7] S. Serikawa and H. Lu. Underwater image dehazing using joint trilateral filter. Computers & Electrical Engineering, 40(1):41–50, 2014.
[8] K. Gibson, D. Vo, and T. Nguyen. An investigation of dehazing effects on image and video coding. IEEE TIP, 21(2):662–673, 2012.
[9] N. Carlevaris-Bianco, A. Mohan, and R. Eustice. Initial results in underwater single image dehazing, in IEEE OCEANS, pages 1–8, 2010.
[10] J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo, in IEEE CVPR, pages 3485–3492, 2010.
[11] N. Hautière, J.-P. Tarel, D. Aubert, and E. Dumont. Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Analysis & Stereology, 27(2):87–95, 2008.
[12] C. Ancuti, C. Ancuti, T. Haber, and P. Bekaert. Enhancing underwater images and videos by fusion, in IEEE CVPR, pages 81–88, 2012.