86 © 2017 Journal of Medical Signals & Sensors | Published by Wolters Kluwer ‑ Medknow
Address for correspondence: Dr. Hossein Rabbani, Department of Advanced Medical Technology, Isfahan University of Medical Sciences, Isfahan, Iran.
E-mail: h_rabbani@med.mui.ac.ir
Website: www.jmss.mui.ac.ir
Abstract
Interpretation of high-speed optical coherence tomography (OCT) images is hampered by large speckle noise. To address this problem, this paper proposes a new method using a two-dimensional (2D) curvelet-based K-SVD algorithm for speckle noise reduction and contrast enhancement of intra-retinal layers in 2D spectral-domain OCT images. For this purpose, we take the curvelet transform of the noisy image. Next, the noisy sub-bands of different scales and rotations are separately thresholded with an adaptive, data-driven thresholding method; each thresholded sub-band is then denoised by K-SVD dictionary learning with a variable-size initial dictionary that depends on the size of the curvelet coefficient matrix in that sub-band. We also modify each coefficient matrix to enhance intra-retinal layers while simultaneously suppressing noise. We demonstrate the ability of the proposed algorithm to reduce speckle noise in 100 publicly available OCT B-scans with and without non-neovascular age-related macular degeneration (AMD), obtaining an improvement in contrast-to-noise ratio from 1.27 to 5.12 and in mean-to-standard-deviation ratio from 3.20 to 14.41.
Keywords: Curvelet transform, dictionary learning, optical coherence tomography, speckle noise
Original Article

Speckle Noise Reduction in Optical Coherence Tomography Using Two-dimensional Curvelet-based Dictionary Learning
Mahdad Esmaeili, Alireza Mehri Dehnavi, Hossein Rabbani, Fedra Hajizadeh1

Department of Advanced Medical Technology, Isfahan University of Medical Sciences, Isfahan, Iran; 1Noor Ophthalmology Research Center, Noor Eye Hospital, Tehran, Iran
How to cite this article: Esmaeili M, Dehnavi AM, Rabbani H, Hajizadeh F. Speckle noise reduction in optical coherence tomography using two-dimensional curvelet-based dictionary learning. J Med Signals Sens 2017;7:86-91.
This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.
For reprints contact: reprints@medknow.com
Introduction
Spectral-domain optical coherence tomography (SD-OCT) is a high-resolution, noninvasive imaging technique used to identify and assess internal retinal structures and abnormalities and to image various aspects of biological tissues with high resolving power (5 µm resolution in depth).[1,2] The main problem with these images is their inherent corruption by speckle noise, due to the coherent detection nature of OCT. Traditional digital filtering methods, including median and Lee filtering,[3] adaptive median and Wiener filtering,[4,5] and an iterative maximum a posteriori-based algorithm,[6] have been employed to reduce speckle noise. These methods provide inadequate noise reduction under high speckle contamination and can cause meaningful loss of faint features. In recent years, other approaches have been explored for speckle noise reduction, such as anisotropic diffusion-based methods,[7-10] wavelet-based methods,[11,12] the curvelet shrinkage technique,[13] dictionary learning-based denoising,[14,15] and a robust principal component analysis-based method.[16] However, the need for more advanced methods that lose minimal detail while suppressing strong speckle makes speckle noise reduction an important part of the OCT image-processing pipeline.
Here, a novel speckle noise reduction algorithm is developed that despeckles OCT images while preserving edge sharpness. For this purpose, we introduce K-SVD dictionary learning in the curvelet transform (CUT) domain for speckle noise reduction of two-dimensional (2D) OCT images. Because the low-scale curvelet coefficients are most affected by noise, and to take advantage of this sparse multiscale directional transform, we introduce a new scheme for dictionary learning: we take the CUT of the noisy image, find a nearly optimal threshold for the curvelet coefficients of each scale and rotation based on the standard deviation of each coefficient matrix, and then apply a curvelet-based K-SVD whose dictionary size varies with the scale and rotation of the coefficient matrix. This method does not need
any high signal-to-noise ratio (SNR) scans (several repeated B-scans from the same position are captured slowly, then registered and averaged to create a less noisy image with a sufficiently high SNR) for dictionary learning, as is required in other works.[14,15]
The paper is organized as follows. Section 2 provides an
introduction to 2D digital CUT (DCUT). In Section 3, we
describe the principles of conventional dictionary learning.
Our proposed method is described in Section 4, and the
results and performance evaluation are presented in Section
5. Finally, this paper is concluded in Section 6.
Two-dimensional Digital Curvelet Transform
The CUT is a high-dimensional time–frequency analysis
of images that gives a sparse representation of objects,
and it has been developed to overcome the inherent
limitations of conventional multiscale representations
such as wavelets (e.g., poor directional selectivity). The
directional selectivity of curvelets and localized spatial
property of each curvelet can be utilized to preserve the
image features along certain directions in each sub-band.
The good directional selectivity, tightness, and sparse
representation properties of this multiscale transform give
new opportunities to analyze and study large datasets in
medical image processing.[17]
This transform can be implemented by employing two
simpler, faster, and less redundant methods, i.e., the
unequally-spaced fast Fourier transform (USFFT) and
the wrapping transform.[17,18] The main difference of these
implementations is related to their choice of spatial grid
to construct the curvelet atoms in each subband. Both
algorithms have the same output, but the wrapping-based
transform has faster computational time and is easier to
implement than USFFT method.[18] The architecture of
CUT via wrapping is roughly presented in the following
form:
1. Take the 2D FFT of the image f and obtain Fourier samples f̂(n1, n2), −n/2 ≤ n1, n2 < n/2 (f is the original image of size n × n).
2. For each scale/angle pair (j, l), form the product d(n1, n2) = Ũj,l(n1, n2) f̂(n1, n2), where Ũj,l(n1, n2) is the discrete localizing window.[18]
3. Wrap this product around the origin to obtain f̃j,l(n1, n2) = W(Ũj,l f̂)(n1, n2). If the periodization of the windowed data d(n1, n2) is defined as

   Wd(n1, n2) = Σ_{m1 ∈ Z} Σ_{m2 ∈ Z} d(n1 + m1 L1,j, n2 + m2 L2,j),

   then at each scale j, Wd(n1, n2) is restricted to indices (n1, n2) inside a rectangle with sides of length L1,j × L2,j near the origin (L1,j ~ 2^j and L2,j ~ 2^(j/2)) to construct the wrapped windowed data.
4. Take the inverse 2D FFT of each f̃j,l to collect the discrete coefficients cD(j, l, k).
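The wrapping in step 3 can be illustrated with a few lines of NumPy (a toy sketch; real implementations such as CurveLab wrap index ranges rather than summing shifted copies, and `wrap_windowed_data` is our name for the routine, not a toolbox call):

```python
import numpy as np

def wrap_windowed_data(d, L1, L2):
    """Periodize d with periods (L1, L2) and keep one L1 x L2 period
    near the origin, i.e. Wd(n1, n2) = sum over shifts of d (step 3)."""
    n1, n2 = d.shape
    Wd = np.zeros((L1, L2), dtype=d.dtype)
    for i in range(n1):
        for j in range(n2):
            Wd[i % L1, j % L2] += d[i, j]   # modulo indexing = wrapping
    return Wd

# toy example: wrap an 8 x 8 windowed product onto a 4 x 2 rectangle
d = np.arange(64, dtype=float).reshape(8, 8)
Wd = wrap_windowed_data(d, 4, 2)
```

Modulo indexing is equivalent to the sum over integer shifts m1, m2 because each input sample lands in exactly one cell of the output rectangle.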
K-SVD Dictionary Learning For Image Denoising
The image denoising problem can be viewed as an inverse problem. One recent approach to solving such inverse problems is sparse decomposition over over-complete dictionaries.[19,20] For a given set of signals Y, a suitable dictionary D can be found such that y_i ≈ D x_i, where x_i is a vector containing the coefficients of the linear combination and y_i ∈ Y. Sparse representation can thus be defined as the optimization problem of finding D and x_i that satisfy:

min_{D,x_i} ||y_i − D x_i||_2^2 subject to ||x_i||_0 < T (1)

where T is a predefined threshold that constrains the sparseness of the representation and ||·||_0 denotes the l0 norm, which counts the number of nonzero elements of a vector. The problem thus involves selecting a dictionary and a sparse linear combination of its atoms to represent each desired signal. For image denoising, the noisy image is broken into patches, and the vectorized version of each patch is treated as a signal. For a given image, viewed as a set of signals Y, denoising amounts to finding a set of patches Z related by:

Y = Z + η (2)

where η is the noise corrupting the patches.
To nd the denoised patches Z
ˆ, the following optimization
problem should be solved:[19,20]
argmin
()
,,xZD
ij
ijij
ij
YZ Dx RZ x
λµ
−+ −+
2
22
0
ij ij Fij (3)
where λ and μ are Lagrange multipliers and Rij is dened
as the matrix which selects the ijth patch from Z, i.e.,
Zij = RijZ.
The rst term in (3) makes sure that the measured image Y
is similar to its denoised version Z and the second and third
parts are sparsity-inducing regulation terms.
To solve the above equation:
1. Initialize by setting Z = Y and D = initial dictionary.
2. Repeat K times:
For each patch R_ij Z, compute the representation vector x_ij using the orthogonal matching pursuit (OMP) algorithm.[21,22] OMP is easy to implement and provides satisfactorily stable results. The algorithm iteratively selects the basis vectors (atoms) that best reduce the representation error in each iteration:

∀ij: x̂_ij = argmin_x ||x||_0 such that ||R_ij Z − D x||_2^2 ≤ (Cσ)^2 (4)

where C is the noise gain and σ is the standard deviation of the noise.
Once this sparse coding stage is done, the algorithm updates the atoms l = 1, 2, …, k of the dictionary one by one to reduce the error term. For this purpose, the set of patches that use atom l, ω_l = {(i, j) | x_ij(l) ≠ 0}, is found and the l-th atom is removed from the dictionary; then the coding error matrix E_l of these signals is calculated, whose columns are:

e_ij^l = R_ij Z − Σ_{m≠l} d_m x_ij(m) (5)

This error matrix is minimized by a rank-1 approximation from the SVD, E_l = UΔV^T: the coefficient values of atom l are replaced with the entries of V_1 Δ(1,1), and the updated dictionary column is d_l = U_1 (the first column of U).
3. Set

Ẑ = (λI + Σ_ij R_ij^T R_ij)^{−1} (λY + Σ_ij R_ij^T D x̂_ij) (6)
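One pass of this sparse-coding/dictionary-update loop can be sketched in NumPy (an illustrative sketch, not the authors' implementation; `omp` and `ksvd_step` are our names, and the per-patch error target follows Eq. 4 with user-supplied C and σ):

```python
import numpy as np

def omp(D, y, eps):
    """Greedy orthogonal matching pursuit (Eq. 4): repeatedly add the
    atom most correlated with the residual until ||y - D x||_2 <= eps."""
    idx, coef = [], np.zeros(0)
    r = y.copy()
    x = np.zeros(D.shape[1])
    while np.linalg.norm(r) > eps and len(idx) < D.shape[1]:
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef
    x[idx] = coef
    return x

def ksvd_step(Y, D, sigma=1.0, C=1.15):
    """One sparse-coding + dictionary-update pass. Columns of Y are
    vectorized patches; the stopping level is eps = C*sigma*sqrt(dim)."""
    eps = C * sigma * np.sqrt(Y.shape[0])
    X = np.column_stack([omp(D, y, eps) for y in Y.T])
    for l in range(D.shape[1]):                      # update atoms one by one
        users = np.nonzero(X[l, :])[0]               # patches that use atom l
        if users.size == 0:
            continue
        # coding error of those patches with atom l removed (Eq. 5)
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, l], X[l, users])
        U, S, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, l] = U[:, 0]                            # rank-1 update of the atom
        X[l, users] = S[0] * Vt[0, :]                # and of its coefficients
    return D, X
```

The final image estimate of Eq. 6 then averages the overlapping reconstructed patches D x̂_ij back into place, weighted against the noisy input by λ.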
Proposed Denoising Method
Our curvelet-based approach consists of first taking the 2D forward DCUT of the noisy image to produce the curvelet coefficients; then, for each sub-band in the transform domain, the coefficient matrix is independently denoised by K-SVD dictionary learning with an initial discrete cosine transform (DCT) dictionary whose size is specific to each scale.
For efficient representation of the different structures in the image, we let the initial dictionary vary in size (depending on the size of the curvelet coefficient matrix in each sub-band) instead of using the traditional fixed form. As the scale of the curvelet coefficient matrix increases (i.e., the resolution decreases), the block size (the size of the blocks operated on) also increases, while at high resolutions the block size is reduced, which gives a better representation of fine structures in the image.
The proposed method for image denoising is as follows.
Forward Digital Curvelet Transform
Take the 2D CUT of the data to produce the curvelet coefficients C(j, l), where j is the scale and l is the orientation. According to our image size (512 × 1000), each image is decomposed into six scales (it is recommended to take the number of scales equal to or less than the default value ⌈log2(min(M, N))⌉ − 3, where M × N is the image size and ⌈x⌉ denotes the smallest integer greater than or equal to x); each scale is then further partitioned into a number of orientations. The number of orientations is l = 1, n, 2n, 2n, 4n, 4n from finer to coarser scales, where n is the number of orientations at the second scale.
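The default decomposition depth quoted above is easy to verify for our image size (a small sketch; `default_num_scales` is our name for this rule, not a toolbox function):

```python
import math

def default_num_scales(M, N):
    """Default curvelet decomposition depth: ceil(log2(min(M, N))) - 3."""
    return math.ceil(math.log2(min(M, N))) - 3

# for the 512 x 1000 B-scans used here: ceil(log2(512)) - 3 = 9 - 3 = 6 scales
scales = default_num_scales(512, 1000)
```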
Initial denoising
For each scale, apply hard thresholding based on the standard deviation of that scale. The hard threshold T_j,l is applied to each curvelet coefficient such that:

C(j, l) = { 0, if abs(C(j, l)) < T_j,l ; C(j, l), otherwise (7)

The threshold T_j,l is selected based on the standard deviation of the coefficient matrix C at that scale and rotation (T_j,l = 0.5 × standard deviation(C)).
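Eq. 7 amounts to per-sub-band hard thresholding, which can be sketched as (the function name and the keyword parameter are ours; the paper fixes k = 0.5):

```python
import numpy as np

def hard_threshold_subband(C, k=0.5):
    """Zero every coefficient with magnitude below T = k * std(C),
    applied independently to each sub-band (Eq. 7 with k = 0.5)."""
    T = k * np.std(C)
    out = C.copy()
    out[np.abs(out) < T] = 0.0
    return out
```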
Initial dictionary selection and K-SVD dictionary
learning denoising
For each 2D coefficient matrix at each scale and rotation, the varying-size initial dictionary is chosen by applying the DCT to that sub-band.
We let the block size in dictionary learning depend on the size of each coefficient matrix, so the dictionary size also varies with the block size. After finding the appropriate 2D initial dictionary, D, for each sub-band, the noisy curvelet coefficient matrices at the same scale and rotation are despeckled by K-SVD dictionary learning as described in Section 3.
According to the size of the curvelet coefficient matrix C, the block size and dictionary size are set empirically to:

Block size = ⌊0.5 √(min(m, n))⌋ (8)

Dictionary size = (Block size)^3 (9)

where m and n are, respectively, the numbers of rows and columns of the coefficient matrix C and ⌊x⌋ denotes the largest integer smaller than or equal to x.
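Under one possible reading of Eqs. 8-9 — the printed formula is hard to recover exactly from the typesetting, so the block = ⌊0.5·√min(m, n)⌋ rule below is a hypothetical reconstruction, chosen to be consistent with the 3 × 3 and 4 × 4 blocks shown in Figure 2 — the sizes can be computed as:

```python
import math

def dictionary_shape(m, n):
    """Scale-dependent block and dictionary sizes, under one reading of
    Eqs. (8)-(9): block = floor(0.5 * sqrt(min(m, n))) -- a hypothetical
    reconstruction -- and number of atoms = block ** 3."""
    block = math.floor(0.5 * math.sqrt(min(m, n)))
    return block, block ** 3
```

For example, a 50 × 120 sub-band would give 3 × 3 blocks with 27 atoms, and a 70 × 70 sub-band 4 × 4 blocks with 64 atoms.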
Contrast enhancement
Since the CUT deals well with edge discontinuities, it is a good candidate for edge enhancement. Hence, to enhance the contrast of intra-retinal layer boundaries, the denoised curvelet coefficients can be modified to enhance edges in a B-scan image[23,24] before taking the 2D inverse discrete CUT (2D-IDCUT). For OCT images, a function k_c(C_j,l) is defined empirically, similar to the function defined by Starck for gray and color image enhancement,[25] which modifies the values of the curvelet coefficients as follows:

K_c(x) = { 1, if abs(x) < N ; s1, if N ≤ abs(x) < 2N ; s2, if abs(x) ≥ 2N (10)
In this equation, N = 0.1M, where M is the maximum curvelet coefficient of the relevant band, and s1 and s2 are defined as follows:

s1 = ((abs(x) − N)(M/N)^2 + (2N − abs(x))) / N (11)

s2 = (M / abs(x))^2 (12)
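This enhancement step can be sketched as a coefficient-gain function (a hypothetical reconstruction assuming a Starck-style three-branch gain with knees at N and 2N; the exact branch constants of the printed equations are hard to recover, and `enhance_subband` is our name):

```python
import numpy as np

def enhance_subband(C, p=2):
    """Starck-style per-sub-band gain: leave coefficients below N = 0.1*M
    untouched, amplify mid-range ones, and taper the gain back to 1 at
    the sub-band maximum M (an assumed reading of Eqs. 10-12)."""
    a = np.abs(C)
    M = a.max()
    N = 0.1 * M
    g = np.ones_like(a)                       # |x| < N: gain 1
    mid = (a >= N) & (a < 2 * N)
    hi = a >= 2 * N
    g[mid] = ((a[mid] - N) * (M / N) ** p + (2 * N - a[mid])) / N
    g[hi] = (M / np.maximum(a[hi], 1e-12)) ** p
    return g * C
```

Note the qualitative behavior matches the stated goal: faint layer-boundary coefficients are boosted while the largest coefficient is left unchanged (gain (M/M)^p = 1).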
Converting to image domain
Then, we reconstruct the enhanced image from the denoised and modified curvelet coefficients by applying the IDCUT. The outline of the whole denoising process is shown in Figure 1.
Results
We tested our algorithm on 100 selected 2D OCT B-scans of size 512 × 1000 from publicly available datasets[15,26] acquired with SD-OCT Bioptigen imaging systems, with and without non-neovascular AMD. For better representation of image details in the low-scale, high-frequency components, the block size is selected to depend on the scale of the coefficient matrix: at low scales the coefficient matrix is small, and its size increases for the high-scale, low-frequency components of the image. Figure 2 shows samples of the variable-size initial dictionaries in the curvelet domain used for K-SVD-based denoising of each curvelet sub-band.
Figure 3 shows the OCT images reconstructed by the curvelet-based K-SVD enhancement method.
For K-SVD denoising at each scale and rotation of the curvelet coefficients, 1000 patches with equal spacing between samples in each dimension are selected. As a compromise between having enough iterations for a good result and keeping the processing time reasonable, we set K empirically to 15 for our dataset. Following Eqs. 3 and 4, we also set C = 1.15 and λ = 30/σ, where σ is selected to be 25 for our dataset.
To compare the performance of different denoising algorithms quantitatively, we compute the averaged mean-to-standard-deviation ratio (MSR)[27] and contrast-to-noise ratio (CNR)[28] over ten regions of interest (ROIs) in the B-scan OCT images [similar to the foreground ellipses in Figure 4].
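These two metrics can be sketched as follows (one common set of definitions; the exact forms used in the paper follow [27] and [28], and the function names are ours):

```python
import numpy as np

def msr(roi):
    """Mean-to-standard-deviation ratio of a foreground ROI."""
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio between a foreground ROI and the
    background region (one common definition)."""
    return abs(roi.mean() - background.mean()) / np.sqrt(
        0.5 * (roi.var() + background.var()))
```

In practice both values are computed for each of the ten foreground ROIs against the single background ROI of Figure 4 and then averaged.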
The algorithm, implemented in MATLAB, requires around 2 min of computation time to denoise each 512 × 1000 B-scan on an Intel(R) Core i7 CPU with 4 GB of RAM. A drawback of the method of Fang et al.[15] is its time complexity: it takes more than 31 min to denoise each B-scan on the same Intel(R) Core i7 system.
Table 1 compares the quantitative performance of our method with well-known denoising approaches[15] such as Tikhonov,[6] K-SVD,[14] and the multiscale sparsity-based tomographic denoising (MSBTD) approach[15] (the results reported in [15] are on 17 images of this dataset). To show the edge-preserving ability of the proposed method, the average edge preservation (EP)[29] measure over the selected ROIs is computed, giving 0.83 ± 0.01. The EP measure ranges between 0 and 1, taking smaller values when the edges inside the ROI are more blurred.
Figure 5 also shows the visual performance of our proposed method in comparison with some traditional state-of-the-art denoising methods, such as the Bernardes method,[10] Tikhonov,[6] and the multiscale sparsity-based tomographic denoising algorithm.[15]
To show the ability of our proposed method (DCUT + K-SVD), Figure 6 demonstrates the image reconstructed from thresholded curvelet coefficients alone and the result of K-SVD in the image domain (block size = 8, dictionary size = 256). Table 2 also compares the quantitative performance of our method with the thresholded-curvelet (no dictionary learning) and K-SVD-only (no CUT) methods.
Figure 1: The outline of the proposed method
Figure 2: Samples of the trained two-dimensional initial dictionaries in K-SVD-based denoising of each curvelet coefficient matrix. The block size in (a) is 3 × 3 and in (b) is 4 × 4
Conclusion
Speckle noise in OCT images hampers the accurate recognition of morphological characteristics that can be viewed and quantified in OCT tomograms, such as the thickness of intra-retinal layers and the shape of structural features (e.g., drusen, macular holes, macular edema, nerve fiber atrophy, and cysts), which serve as markers in clinical investigation and diagnosis of retinal diseases. Hence, to suppress noise while preserving and enhancing edges, and to exploit the geometric properties of structures and the regularity of edges, we introduced a new curvelet-based K-SVD despeckling and contrast enhancement method for OCT datasets. We discussed the application of dictionary learning along with the CUT for denoising SD-OCT images of normal and AMD retinas. Our proposed method does not need any high-SNR scans or repeated scans (or averaged versions of scans) for dictionary learning, since in some cases averaged frames are not available. Moreover, since the proposed method decomposes the image into lower-dimensional sub-components, we achieved a significant reduction in computational time by letting the size of the initial dictionary depend on the size of each scale. The EP value also shows that the proposed method preserves edges very well while removing speckle noise. As OCT is a medical imaging technique that captures three-dimensional (3D) images from within optical scattering media, directly analyzing 3D volumes with 3D sparse transforms, taking the 3D geometrical nature of the data into account, should outperform 2D slice-by-slice analysis; extending this work to the 3D domain is our ongoing research.
Financial support and sponsorship
Nil.
Conicts of interest
There are no conicts of interest.
Figure 4: Selected background and foreground regions of interest for
evaluation. Bigger ellipse outside the retinal region is used as background
region of interest and other circles represent foreground regions of interest
Figure 6: Visual comparison of our proposed method with thresholded digital curvelet transform coefficients and K-SVD. (a) Initial images, (b) reconstructed image with thresholded digital curvelet transform coefficients, (c) images obtained by using K-SVD in the image domain, (d) proposed method
Figure 3: The implementation of the proposed method. (a and c) Initial images and (b and d) images obtained by the proposed method
Figure 5: Visual performance for a spectral-domain optical coherence tomography retinal image using the Bernardes, Tikhonov, multiscale sparsity-based tomographic denoising, and proposed methods. (a) Original noisy image, (b) denoising result using the Bernardes method, (c) denoising result using the Tikhonov method, (d) denoising result using multiscale sparsity-based tomographic denoising, (e) proposed method
Table 1: Mean and standard deviation of the mean-to-standard-deviation ratio and contrast-to-noise ratio for 17 spectral domain optical coherence tomography retinal images using the Tikhonov, K-SVD, multiscale sparsity-based tomographic denoising, and proposed methods

            Original  Tikhonov[6]  K-SVD[14]  MSBTD[15]  Proposed method
Mean (CNR)    1.27      3.26         4.11       4.76       5.12
STD (CNR)     0.43      0.22         1.23       1.54       1.81
Mean (MSR)    3.20      7.64        11.22      14.76      14.41
STD (MSR)     0.46      0.63         2.77       4.75       4.12

MSR – Mean-to-standard-deviation ratio; CNR – Contrast-to-noise ratio; MSBTD – Multiscale sparsity-based tomographic denoising
References
1. Schmitt JM. Optical coherence tomography (OCT): A review.
IEEE J Sel Top Quantum Electron 1999;5:1205-15.
2. Welzel J. Optical coherence tomography in dermatology: A
review. Skin Res Technol 2001;7:1-9.
3. Ozcan A, Bilenca A, Desjardins AE, Bouma BE, Tearney GJ.
Speckle reduction in optical coherence tomography images
using digital filtering. J Opt Soc Am A Opt Image Sci Vis
2007;24:1901-10.
4. Loupas T, McDicken W, Allan P. An adaptive weighted median
filter for speckle suppression in medical ultrasonic images. IEEE
Trans Circuits Syst 1989;36:129-35.
5. Portilla J, Strela V, Wainwright MJ, Simoncelli EP. Adaptive
Wiener Denoising using a Gaussian Scale Mixture Model in the
Wavelet Domain. IEEE Int Conf Image Process 2001;2:37-40.
6. Chong GT, Farsiu S, Freedman SF, Sarin N, Koreishi AF,
Izatt JA, et al. Abnormal foveal morphology in ocular albinism
imaged with spectral-domain optical coherence tomography.
Arch Ophthalmol 2009;127:37-44.
7. Aja S, Alberola C, Ruiz J. Fuzzy anisotropic diffusion for
speckle ltering. IEEE Int Conf Acoust Speech Signal Process
2001;2:1261-4.
8. Puvanathasan P, Bizheva K. Interval type-II fuzzy anisotropic
diffusion algorithm for speckle noise reduction in optical
coherence tomography images. Opt Express 2009;17:733-46.
9. Yu Y, Acton ST. Speckle reducing anisotropic diffusion. IEEE
Trans Image Process 2002;11:1260-70.
10. Bernardes R, Maduro C, Serranho P, Araújo A, Barbeiro S,
Cunha-Vaz J. Improved adaptive complex diffusion despeckling
filter. Opt Express 2010;18:24048-59.
11. Luisier F, Blu T, Unser M. A new SURE approach to image
denoising: Interscale orthonormal wavelet thresholding. IEEE
Trans Image Process 2007;16:593-606.
12. Chitchian S, Fiddy MA, Fried NM. Denoising during optical
coherence tomography of the prostate nerves via wavelet
shrinkage using dual-tree complex wavelet transform. J Biomed
Opt 2009;14:014031.
13. Jian Z, Yu Z, Yu L, Rao B, Chen Z, Tromberg BJ. Speckle
attenuation in optical coherence tomography by curvelet
shrinkage. Opt Lett 2009;34:1516-8.
14. Elad M, Aharon M. Image denoising via sparse and redundant
representations over learned dictionaries. IEEE Trans Image
Process 2006;15:3736-45.
15. Fang L, Li S, Nie Q, Izatt JA, Toth CA, Farsiu S. Sparsity based
denoising of spectral domain optical coherence tomography
images. Biomed Opt Express 2012;3:927-42.
16. Luan F, Wu Y. Application of rpca in optical coherence
tomography for speckle noise reduction. Laser Phys Lett
2013;10:035603.
17. Candès E, Demanet L, Donoho D, Ying L. Fast discrete
curvelet transforms. Multiscale Model Simul 2006;5:861-99.
18. Ma J, Plonka G. A review of curvelets and recent applications.
IEEE Signal Process Mag 2010;27:118-33.
19. Aharon M, Elad M, Bruckstein A. The K-SVD: An algorithm
for designing overcomplete dictionaries for sparse representation.
IEEE Trans Signal Process 2006;54:4311-22.
20. Chatterjee P, Milanfar P. Denoising using the K-SVD
method. Course Web Pages, EE 264: Image Processing and
Reconstruction; 2007. p. 1-12.
21. Mallat SG, Zhang Z. Matching pursuits with time-frequency
dictionaries. IEEE Trans Signal Process 1993;41:3397-415.
22. Pati YC, Rezaiifar R, Krishnaprasad P. Orthogonal Matching
Pursuit: Recursive Function Approximation with Applications
to Wavelet Decomposition. The Twenty-Seventh IEEE Asilomar
Conference on Signals, Systems and Computers; 1993. p. 40-4.
23. Esmaeili M, Rabbani H, Dehnavi AM. Automatic optic disk
boundary extraction by the use of curvelet transform and
deformable variational level set model. Pattern Recognit
2012;45:2832-42.
24. Esmaeili M, Rabbani H, Mehri A, Dehghani A. Extraction
of Retinal Blood Vessels by Curvelet Transform. 16th IEEE
International Conference on Image Processing (ICIP); 2009. p.
3353-6.
25. Starck JL, Murtagh F, Candès EJ, Donoho DL. Gray and color
image contrast enhancement by the curvelet transform. IEEE
Trans Image Process 2003;12:706-17.
26. Farsiu S, Chiu SJ, O’Connell RV, Folgar FA, Yuan E,
Izatt JA, et al. Quantitative classification of eyes with and
without intermediate age-related macular degeneration using
optical coherence tomography. Ophthalmology 2014;121:162-72.
27. Cincotti G, Loi G, Pappalardo M. Frequency decomposition
and compounding of ultrasound medical images with wavelet
packets. IEEE Trans Med Imaging 2001;20:764-71.
28. Bao P, Zhang L. Noise reduction for magnetic resonance images
via adaptive multiscale products thresholding. IEEE Trans Med
Imaging 2003;22:1089-99.
29. Pizurica A, Jovanov L, Huysmans B, Philips W. Multiresolution
denoising for optical coherence tomography: A review and
evaluation. Curr Med Imaging Rev 2008;4:270-84.
Table 2: Mean-to-standard-deviation ratio and contrast-to-noise ratio results of our proposed method in comparison with the image reconstructed from thresholded digital curvelet transform coefficients and the K-SVD-based denoising method, for an image of Figure 6

       Original  Thresholded DCUT  K-SVD  Proposed method
CNR      1.17          3.53         4.23       5.03
MSR      3.35          9.21        11.03      14.12

DCUT – Digital curvelet transform; MSR – Mean-to-standard-deviation ratio; CNR – Contrast-to-noise ratio
... [7,8] To reduce speckle noise from OCT images, many traditional methods such as adaptive median and Wiener filtering, [9,10] median, and Lee filtering [11][12][13][14] are suggested but these methods are often obscure in details and affect edges in an image. In this paper, we have used a new 2-dimensional (2D) curvelet-based K-SVD algorithm [15] to speckle noise reduction. Even though this method enhances intraretinal layers, with noise suppression and optimally despeckling OCT image, the texture preservation (TP) parameter, which is a measure of retaining texture in a region of interest (ROI), seems not to be satisfactory (TP would be close to 0 for severely flattened image and remains close to 1 at its best). ...
... Also, the visual comparison of these different denoising methods is illustrated in Figure 7. When the structures of the image were more flattened and the edges inside the ROI were more blurred, these measurements had smaller Table 2: Mean and standard deviation of the edge preservation, texture preservation, mean to standard deviation ratio, contrast to noise ratio, and equivalent number of look for 17 spectral domain optical coherence tomography retinal images by the use of three dimensional CWDL, [47] Tikhonov, [48] MSBTD, [46] K-SVD, [47] K-SVD based DCUT, [15] and proposed method Original 3D CWDL [47] Tikhonov [48] MSBTD [46] K-SVD [47] K-SVD based DCUT [15] Proposed method Mean±STD (EP) [49] 1±0 At first, the mentioned data consisting of 60 EDI-OCT images were manually segmented by a retinal ophthalmologist, then the automatic segmentation results of the choroid were compared to the manual segmentation applying dice similarity coefficient (DSC). DSC is a statistical metric for comparing the similarity between two samples presented by Thorvald Sørensen and Lee Raymond Dice, [53] respectively in 1948 and 1945. ...
... Also, the visual comparison of these different denoising methods is illustrated in Figure 7. When the structures of the image were more flattened and the edges inside the ROI were more blurred, these measurements had smaller Table 2: Mean and standard deviation of the edge preservation, texture preservation, mean to standard deviation ratio, contrast to noise ratio, and equivalent number of look for 17 spectral domain optical coherence tomography retinal images by the use of three dimensional CWDL, [47] Tikhonov, [48] MSBTD, [46] K-SVD, [47] K-SVD based DCUT, [15] and proposed method Original 3D CWDL [47] Tikhonov [48] MSBTD [46] K-SVD [47] K-SVD based DCUT [15] Proposed method Mean±STD (EP) [49] 1±0 At first, the mentioned data consisting of 60 EDI-OCT images were manually segmented by a retinal ophthalmologist, then the automatic segmentation results of the choroid were compared to the manual segmentation applying dice similarity coefficient (DSC). DSC is a statistical metric for comparing the similarity between two samples presented by Thorvald Sørensen and Lee Raymond Dice, [53] respectively in 1948 and 1945. ...
Article
Full-text available
Background: Automatic segmentation of the choroid on optical coherence tomography (OCT) images helps ophthalmologists in diagnosing eye pathologies. Compared to manual segmentations, it is faster and is not affected by human errors. The presence of the large speckle noise in the OCT images limits the automatic segmentation and interpretation of them. To solve this problem, a new curvelet transform-based K-SVD method is proposed in this study. Furthermore, the dataset was manually segmented by a retinal ophthalmologist to draw a comparison with the proposed automatic segmentation technique. Methods: In this study, curvelet transform-based K-SVD dictionary learning and Lucy-Richardson algorithm were used to remove the speckle noise from OCT images. The Outer/Inner Choroidal Boundaries (O/ICB) were determined utilizing graph theory. The area between ICB and outer choroidal boundary was considered as the choroidal region. Results: The proposed method was evaluated on our dataset and the average dice similarity coefficient (DSC) was calculated to be 92.14% ± 3.30% between automatic and manual segmented regions. Moreover, by applying the latest presented open-source algorithm by Mazzaferri et al. on our dataset, the mean DSC was calculated to be 55.75% ± 14.54%. Conclusions: A significant similarity was observed between automatic and manual segmentations. Automatic segmentation of the choroidal layer could be also utilized in large-scale quantitative studies of the choroid.
... Filter-based despeckling methods are designed using linear or nonlinear filters. Current filtering approaches include anisotropic diffusion [5], the bilateral filter [6], adaptive median filtering [7], the non-local means filter [8], dictionary learning [9], non-local weighted group low-rank representation [10], fuzzy logic [11], a homogeneity-similarity-based method [12] and a multi-frame algorithm [13]. By design, filter-based methods remove multiplicative noise ineffectually, and thus transform-domain approaches have become a popular area of research. ...
... • A weight factor is calculated and an estimator is used to find the noise-free pixels
Dictionary learning [9] • Curvelet transform coefficients are used to design the K-SVD dictionary learning and to find the threshold value
Non-Local Weighted Group Low-Rank Representation (WGLRR) [10] • Low-rank representation (LRR) is used to recover the noise-free data • The error is regularized by the corrupted probability of each pixel
Fuzzy logic [11] • A fuzzy logic technique is used to determine the weights that contract the singular values
Homogeneity-similarity-based methods [12] • Median filtering is used as pre-processing • Horizontal stretching is performed by region restriction and a rectangular neighbourhood
Multi-frame algorithm methods [13] • A logarithmic transformation of the image is performed to find the misaligned frames • The Augmented Lagrange Multiplier (ALM) method is used to find the optimized low rank and sparsity ...
Article
Full-text available
Reduction of noise has a considerable effect in medical image processing and computer vision analysis. Medical images are affected by noise due to low radiation exposure, physiological sources and electronic hardware noise. This affects diagnosis quality and quantitative measurements. In this paper, optical coherence tomography images are denoised through the wavelet transform, and the wavelet threshold value is further optimised using a genetic algorithm (GA). The optimal levels of wavelet decomposition and threshold correction are determined through the GA. The efficacy of the proposed method is verified by comparing the results with other reported wavelet- and GA-based methods in terms of the Peak Signal-to-Noise Ratio (PSNR). The quality of the resulting image is measured through the structural similarity index measure (SSIM), correlation coefficient (COC) and edge preservation index (EPI). The improvement of the proposed approach in terms of the performance parameters PSNR, COC, SSIM and EPI is respectively 2.24%, 7.9%, 17.18% and 6.32% more than the existing GA-based method on retinal OCT images. The results indicate that the suggested algorithm effectively suppresses speckle noise of different variances, and the denoised medical image is more suitable for clinical diagnosis.
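The wavelet-thresholding step underlying the method above can be sketched as follows; a minimal pure-Python illustration using a single-level 1-D Haar transform and a fixed soft threshold in place of the GA-optimised one (all names are illustrative):

```python
def haar_forward(signal):
    """Single-level 1-D Haar transform (orthonormal); len(signal) must be even."""
    s = 2 ** -0.5
    approx = [s * (a + b) for a, b in zip(signal[0::2], signal[1::2])]
    detail = [s * (a - b) for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend((s * (a + d), s * (a - d)))
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noise) coefficients vanish."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def denoise(signal, threshold):
    """Transform, threshold the detail sub-band, transform back."""
    approx, detail = haar_forward(signal)
    return haar_inverse(approx, soft_threshold(detail, threshold))

noisy = [2.1, 1.9, 2.05, 1.95]
clean = denoise(noisy, 0.2)   # → [2.0, 2.0, 2.0, 2.0]
```

With threshold 0 the round trip is lossless; with a positive threshold the small detail coefficients carrying most of the noise are removed. A multi-level 2-D version of the same idea, with the threshold tuned per sub-band, is what GA-optimised wavelet denoisers search over.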
... Finally, the digital output of the camera is transferred to a personal computer (PC), where several processing algorithms such as noise cancellation, common-mode term rejection, moving-average filtering, and the inverse fast Fourier transform (IFFT) are applied to construct the images. [9,10] In SS-OCTs, a narrow-bandwidth laser is used as a light source whose central wavelength is swept very quickly; that is, in SS-OCTs, instead of using a movable reference mirror, a laser source with a variable central wavelength is adopted. [11] The performance of SS-OCTs is similar to that of SD-OCTs, with the difference that a detector array is used in SS-OCTs instead of a spectrometer arm to construct the images. ...
Article
Full-text available
Background: Optical coherence tomography (OCT) is a biomedical imaging technique used to achieve high-resolution images from human tissues in a noninvasive manner.
Methods: In this article, a practical approach is proposed for designing ultrahigh-resolution spectral-domain OCT (UHR SD-OCT) devices. First, the block diagram of a typical SD-OCT is introduced in detail. Second, the internal components of each arm are introduced and the key parameters of each component are highlighted. Third, the effects of these key parameters on the overall performance of the UHR SD-OCT are investigated in a comprehensive manner. Fourth, the most important requirements of a UHR SD-OCT are explained, and suitable optical equipment is selected for each arm based on these requirements. Fifth, the optical accessories as well as the electrical devices required for managing and controlling the performance of a UHR SD-OCT are introduced in brief.
Results: The performance of the proposed device is assessed through various simulations, and finally, the implementation cost and implementation challenges are investigated in detail.
Conclusions: Simulation results indicate that the proposed UHR SD-OCT has acceptable axial resolution and imaging depth; hence, it is a good candidate for use in retinal applications that require UHR imaging.
... This paper introduces a novel method for OCT image enhancement, employing 2D curvelet-based K-SVD for denoising across various sub-bands. It significantly reduces speckle noise and enhances contrast in OCT B-scans [12]. ...
... Compressed sensing, filtering, and model-based algorithms [10] are just a few of the many computational strategies that have been reported for enhancing the quality of ophthalmic OCT images. One compressed sensing method uses high-SNR images to denoise neighbouring low-SNR B-scans [11]. However, this technique is less robust in clinical settings because it requires an uneven scanning pattern to slowly record the high-SNR B-scans needed to build a sparse representation dictionary. ...
Article
Anterior segment optical coherence tomography (AS-OCT) is a popular imaging technique that can directly visualize the anterior segment structures, while inherent speckle noise severely impairs visual readability and subsequent clinical analysis. Though unpaired OCT image denoising algorithms have been developed to improve visual quality given the limited supervised clinical data, preserving edge structures while denoising remains challenging, especially in AS-OCT images with little hierarchy and low contrast. This work proposes a contrast-aware edge enhancement generative adversarial network (E²GAN), particularly for unpaired AS-OCT image denoising. Specifically, to improve edge-structure consistency, we design a contrast attention mechanism for exploiting diverse hierarchical knowledge from multiple contrast images and adopt particular gradient-guided speckle filtering modules with an edge preservation loss for stabilizing the network. Additionally, considering that bi-directional GANs often focus on global appearance rather than essential features, E²GAN adds a perceptual quality constraint into the cycle consistency. Extensive experiments validate the superiority of E²GAN for AS-OCT image denoising and its benefits for downstream clinical analysis. Further experiments on synthetic retinal OCT images prove the generalization of E²GAN.
Article
Sweat pores are gaining recognition as a secure, reliable, and identifiable third-level fingerprint feature. Challenges arise in collecting sweat pores when fingers are contaminated, dry, or damaged, leading to unclear or vanished surface sweat pores. Optical Coherence Tomography (OCT) has been applied to the collection of fingertip biometric features. The sweat pores mapped from the subcutaneous sweat glands collected by OCT possess higher security and stability. However, speckle noise in OCT images can blur sweat glands, making segmentation and extraction difficult. Traditional denoising methods cause unclear sweat gland contours and structural loss due to smearing and excessive smoothing. Deep learning-based methods have not achieved good results due to the lack of clean images as ground truth. This paper proposes a sweat gland enhancement method for fingertip OCT images based on a Generative Adversarial Network (GAN). It can effectively remove speckle noise while eliminating irrelevant structures and repairing the lost structure of sweat glands, ultimately improving the accuracy of sweat gland segmentation and extraction. To the best of our knowledge, this is the first time that sweat gland enhancement has been investigated and proposed. In this method, a paired-dataset generation strategy is proposed, which can extend a few manually enhanced ground-truth images into a high-quality paired dataset. An improved Pix2Pix for sweat gland enhancement is proposed, with the addition of a perceptual loss to mitigate structural distortions during the image translation process. It is worth noting that after obtaining the paired dataset, any advanced supervised image-to-image translation network can be adapted into our framework for enhancement. Experiments are carried out to verify the effectiveness of the proposed method.
Article
Full-text available
To mitigate noise effects without information loss at the edges of radiological images, a well-designed preprocessing algorithm is required to assist radiologists. This paper proposes a hybrid adaptive preprocessing algorithm that utilizes the Rudin-Osher-Fatemi (ROF) model for edge detection, the Richardson-Lucy (RL) algorithm for image enhancement, and block-matching 3D (BM3D) collaborative filtering for denoising images. The performance of the proposed method is assessed on two realistic datasets, one of chest X-ray images and the other of MRI and CT images. The proposed hybrid system verifies the data reliability of Gaussian-noise-affected medical images. The simulation results show that the proposed adaptive method attains high peak signal-to-noise ratios of 47.4433 dB for the chest X-ray dataset and 46.8674 dB for the MRI and CT dataset, respectively, at a standard deviation value of 2. The performance analysis of the proposed scheme is further carried out using various statistical parameters, such as root-mean-square error, contrast-to-noise ratio, the Bhattacharyya coefficient, and the edge preservation index. A comparative analysis of denoised image quality shows that the proposed system achieves better performance than several existing denoising methods.
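The PSNR and RMSE figures quoted above are computed as follows; a minimal pure-Python sketch for 8-bit grayscale images stored as nested lists (function names are illustrative):

```python
import math

def rmse(reference, test):
    """Root-mean-square error between two equal-sized grayscale images."""
    flat_r = [p for row in reference for p in row]
    flat_t = [p for row in test for p in row]
    mse = sum((r - t) ** 2 for r, t in zip(flat_r, flat_t)) / len(flat_r)
    return math.sqrt(mse)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    err = rmse(reference, test)
    if err == 0:
        return float("inf")  # identical images
    return 20.0 * math.log10(peak / err)

# Two 2x2 images differing by 5 gray levels everywhere: RMSE = 5,
# PSNR = 20*log10(255/5) ≈ 34.15 dB.
example = psnr([[10, 10], [10, 10]], [[15, 15], [15, 15]])
```

A ~47 dB PSNR, as reported above, corresponds to an RMSE of roughly one gray level on an 8-bit scale.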
Article
Full-text available
Retinal diseases are a significant cause of visual impairment globally. In the worst case, they may lead to severe vision loss or blindness. Accurate diagnosis is a key factor in the right treatment planning that can stop or slow the disease. An examination that can aid in the right diagnosis is Optical Coherence Tomography (OCT). OCT scans are susceptible to various noise effects which deteriorate their quality and, as a result, may impede the analysis of their content. In this paper, we propose a novel and effective method for OCT image denoising using a deep learning model trained on pairs of noisy and clean scans obtained by BM3D filtering. A comprehensive dataset of 21926 OCT scans, collected from 869 patients (1639 eyes), covering both healthy and pathological cases, was used for training and testing of the proposed scheme. The method was validated taking into account quantitative metrics concerning image quality. In addition, the proposed denoising scheme was evaluated by analyzing the impact of applying it to eye disease classification based on Convolutional Neural Networks (CNNs), where we obtained an improvement of around 1-3 pp (percentage points). A separate dataset of 25697 scans collected from 1910 patients (2953 eyes) was used for this purpose. The conducted experiments have proved that the method can be applied as a preprocessing step in order to provide better disease classification results and can be useful in other OCT image analysis tasks. The proposed solution is much faster than the classical BM3D filter (over a ninetyfold speed-up) and performs better than it and other related methods, especially when a big set of images needs to be processed at once. Furthermore, the use of the diverse dataset shows the benefit over methods which are based on using only healthy scans for the training of the neural network.
Article
Full-text available
The recently emerging non-invasive imaging modality, optical coherence tomography (OCT), is becoming an increasingly important diagnostic tool in various medical applications. One of its main limitations is the presence of speckle noise, which obscures small and low-intensity features. The use of multiresolution techniques has recently been reported by several authors with promising results. These approaches take into account the signal and noise properties in different ways. Approaches that take into account the global orientation properties of OCT images accordingly apply different levels of smoothing in different orientation subbands. Other approaches take into account local signal and noise covariances. So far it was unclear how these different approaches compare to each other and to the best available single-resolution despeckling techniques. The clinical relevance of the denoising results also remains to be determined. In this paper we systematically review recent multiresolution OCT speckle filters and report the results of a comparative experimental study. We use 15 different OCT images extracted from five different three-dimensional volumes, and we also generate a software phantom with real OCT noise. These test images are processed with the different filters and the results are evaluated both visually and in terms of different performance measures. The results indicate significant differences in the performance of the analyzed methods. Wavelet techniques perform much better than the single-resolution ones, and some of the wavelet methods improve the quality of OCT images remarkably.
Article
Full-text available
Multiresolution methods are deeply related to image processing, biological and computer vision, scientific computing, etc. The curvelet transform is a multiscale directional transform which allows an almost optimal nonadaptive sparse representation of objects with edges. It has generated increasing interest in the community of applied mathematics and signal processing over the past years. In this paper, we present a review of the curvelet transform, including its history beginning from wavelets, its logical relationship to other multiresolution multidirectional methods like contourlets and shearlets, and its basic theory and discrete algorithm. Further, we consider recent applications in image/video processing, seismic exploration, fluid mechanics, simulation of partial differential equations, and compressed sensing.
Article
Full-text available
In this paper, we make contact with the field of compressive sensing and present a development and generalization of tools and results for reconstructing irregularly sampled tomographic data. In particular, we focus on denoising Spectral-Domain Optical Coherence Tomography (SDOCT) volumetric data. We take advantage of customized scanning patterns, in which, a selected number of B-scans are imaged at higher signal-to-noise ratio (SNR). We learn a sparse representation dictionary for each of these high-SNR images, and utilize such dictionaries to denoise the low-SNR B-scans. We name this method multiscale sparsity based tomographic denoising (MSBTD). We show the qualitative and quantitative superiority of the MSBTD algorithm compared to popular denoising algorithms on images from normal and age-related macular degeneration eyes of a multi-center clinical trial. We have made the corresponding data set and software freely available online.
Article
Objective: To define quantitative indicators for the presence of intermediate age-related macular degeneration (AMD) via spectral-domain optical coherence tomography (SD-OCT) imaging of older adults.
Design: Evaluation of diagnostic test and technology.
Participants and Controls: One eye from 115 elderly subjects without AMD and 269 subjects with intermediate AMD from the Age-Related Eye Disease Study 2 (AREDS2) Ancillary SD-OCT Study.
Methods: We semiautomatically delineated the retinal pigment epithelium (RPE) and RPE drusen complex (RPEDC, the axial distance from the apex of the drusen and RPE layer to Bruch's membrane) and total retina (TR, the axial distance between the inner limiting and Bruch's membranes) boundaries. We registered and averaged the thickness maps from control subjects to generate a map of "normal" non-AMD thickness. We considered RPEDC thicknesses larger or smaller than 3 standard deviations from the mean as abnormal, indicating drusen or geographic atrophy (GA), respectively. We measured TR volumes, RPEDC volumes, and abnormal RPEDC thickening and thinning volumes for each subject. By using different combinations of these 4 disease indicators, we designed 5 automated classifiers for the presence of AMD on the basis of the generalized linear model regression framework. We trained and evaluated the performance of these classifiers using the leave-one-out method.
Main Outcome Measures: The range and topographic distribution of the RPEDC and TR thicknesses in a 5-mm diameter cylinder centered at the fovea.
Results: The most efficient method for separating AMD and control eyes required all 4 disease indicators. The area under the curve (AUC) of the receiver operating characteristic (ROC) for this classifier was >0.99. Overall neurosensory retinal thickening in eyes with AMD versus control eyes in our study contrasts with previous smaller studies.
Conclusions We identified and validated efficient biometrics to distinguish AMD from normal eyes by analyzing the topographic distribution of normal and abnormal RPEDC thicknesses across a large atlas of eyes. We created an online atlas to share the 38 400 SD-OCT images in this study, their corresponding segmentations, and quantitative measurements.
Article
Optical coherence tomography (OCT) is a promising technology, which could be used in a variety of imaging applications. However, OCT images are usually degraded by speckle noise. Speckle noise reduction in OCT is particularly challenging because it is difficult to separate the noise and the information components in the speckle pattern. In this study, a novel speckle noise reduction technique, based on robust principal component analysis (RPCA), is presented and applied to OCT images for the first time. The proposed technique gives an optimal estimate of OCT image domain transformations such that the matrix of transformed OCT images can be decomposed as the sum of a sparse matrix of speckle noise and a low-rank matrix of the denoised image. The decomposition is a unique feature of the proposed method which can not only reduce the speckle noise, but also preserve the structural information about the imaged object. Applying the proposed technique to a number of OCT images showed significant improvement of image quality.
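The low-rank plus sparse split described above can be illustrated with a simplified solver; the sketch below alternates two proximal operators (singular value thresholding for the nuclear norm, elementwise soft thresholding for the l1 norm) rather than the exact principal component pursuit algorithm of the paper, and all names and parameter defaults are illustrative:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Elementwise soft threshold: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, tau=1.0, n_iter=100):
    """Split M into a low-rank part L (structure) plus a sparse part S
    (speckle outliers) by alternating the two proximal steps above;
    a simplified stand-in for a full RPCA solver."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))  # standard RPCA weighting
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, tau)          # low-rank update
        S = shrink(M - L, lam * tau) # sparse update
    return L, S
```

On a matrix built as a rank-1 image plus a few bright speckle spikes, the spikes end up in S while the smooth structure stays in L, which is the separation property the abstract describes.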
Article
Sparse representation of images has been a recent area of growing interest. It finds applications in many problems in image processing. In this report we study a particular method of achieving sparse representation using the recently proposed K-SVD algorithm by Aharon et al. (1) and how this sparse representation framework has been extended to perform denoising, as illustrated by the authors in (2). Suitable illustrations along with the theory are detailed in this report to help in understanding the intricacies of this method.
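A toy instance of the K-SVD iteration discussed above, restricted to sparsity level 1 so that each training signal is approximated by a single scaled atom; a sketch of the two alternating stages (sparse coding, then one SVD per atom), not the full algorithm of Aharon et al.:

```python
import numpy as np

def ksvd_1sparse(Y, n_atoms, n_iter=10):
    """Minimal K-SVD with sparsity level 1: each column of Y is approximated
    by one scaled dictionary atom. Sparse coding picks the atom with the
    largest |correlation|; the dictionary update replaces each atom (and its
    coefficients) with the leading singular pair of its assigned signals."""
    dim, n_sig = Y.shape
    # Initialise atoms from the first training signals (a common choice).
    D = Y[:, :n_atoms] / np.linalg.norm(Y[:, :n_atoms], axis=0)
    X = np.zeros((n_atoms, n_sig))
    for _ in range(n_iter):
        # Stage 1 - sparse coding (1-sparse): best-matching atom per signal.
        corr = D.T @ Y
        idx = np.abs(corr).argmax(axis=0)
        X = np.zeros((n_atoms, n_sig))
        X[idx, np.arange(n_sig)] = corr[idx, np.arange(n_sig)]
        # Stage 2 - dictionary update: one SVD per atom over its signals.
        for k in range(n_atoms):
            members = np.where(idx == k)[0]
            if members.size == 0:
                continue  # atom unused this round; leave it unchanged
            U, s, Vt = np.linalg.svd(Y[:, members], full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, members] = s[0] * Vt[0]
    return D, X
```

The full algorithm differs in using OMP (or another pursuit) with sparsity T > 1 in stage 1 and in updating each atom against the residual that excludes its own contribution; with T = 1 those two formulations coincide, which is what makes this toy version compact.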
Article
Background/aims: Optical coherence tomography (OCT) is a non-invasive technique for morphological investigation of tissue. Since its development in the late 1980s it is mainly used as a diagnostic tool in ophthalmology. For examination of a highly scattering tissue like the skin, it was necessary to modify the method. Early studies on the value of OCT for skin diagnosis gave promising results. Methods: The OCT technique is based on the principle of Michelson interferometry. The light sources used for OCT are low coherent superluminescent diodes operating at a wavelength of about 1300 nm. OCT provides two-dimensional images with a scan length of a few millimeters (mm), a resolution of about 15 μm and a maximum detection depth of 1.5 mm. The image acquisition can be performed nearly in real time. The measurement is non-invasive and with no side effects. Results: The in vivo OCT images of human skin show a strong scattering from tissue with a few layers and some optical inhomogeneities. The resolution enables the visualization of architectural changes, but not of single cells. In palmoplantar skin, the thick stratum corneum is visible as a low-scattering superficial well defined layer with spiral sweat gland ducts inside. The epidermis can be distinguished from the dermis. Adnexal structures and blood vessels are low-scattering regions in the upper dermis. Skin tumors show a homogenous signal distribution. In some cases, tumor borders to healthy skin are detectable. Inflammatory skin diseases lead to changes of the OCT image, such as thickening of the epidermis and reduction of the light attenuation in the dermis. A quantification of treatment effects, such as swelling of the horny layer due to application of a moisturizer, is possible. Repeated measurements allow a monitoring of the changes over time. Conclusion: OCT is a promising new bioengineering method for investigation of skin morphology. In some cases it may be useful for diagnosis of skin diseases. 
Because of its non-invasive character, the technique allows monitoring of inflammatory diseases over time. An objective quantification of the efficacy and tolerance of topical treatment is also possible. Due to the high resolution and simple application, OCT is an interesting addition to other morphological techniques in dermatology.
Conference Paper
We describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries of elementary building blocks, e.g., affine (wavelet) frames. We propose a modification to the matching pursuit algorithm of Mallat and Zhang (1992) that maintains full backward orthogonality of the residual (error) at every step and thereby leads to improved convergence. We refer to this modified algorithm as orthogonal matching pursuit (OMP). It is shown that all additional computation required for the OMP algorithm may be performed recursively.
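The OMP algorithm summarised above can be sketched compactly; a minimal NumPy illustration (assuming unit-norm dictionary columns) in which the least-squares re-fit keeps the residual orthogonal to all selected atoms, here via a direct solve rather than the recursive update the paper derives:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily select the dictionary atom most
    correlated with the current residual, then re-fit the coefficients of ALL
    selected atoms by least squares (the backward-orthogonal step)."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Atom whose correlation with the residual is largest in magnitude.
        k = int(np.abs(D.T @ residual).argmax())
        if k not in support:
            support.append(k)
        # Least squares on the chosen atoms keeps the residual orthogonal
        # to span{D[:, support]}.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D[:, support] @ coef
    return x

# Overcomplete dictionary of three unit-norm atoms in R^2.
root2 = np.sqrt(2.0)
D = np.array([[1.0, 0.0, 1.0 / root2],
              [0.0, 1.0, 1.0 / root2]])
x = omp(D, np.array([2.0, 2.0]), 1)  # picks the diagonal atom
```

For y = [2, 2] the diagonal atom has the largest correlation (2√2 versus 2), so a single OMP step represents y exactly with one coefficient; a plain matching pursuit would reach the same answer here, but on correlated atoms the least-squares re-fit is what prevents re-selecting already-explained directions.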