Segmentation of three-dimensional scenes encoded in digital holograms
Karen M. Molonya, Conor P. McElhinneya, Bryan M. Hennellya and Thomas J. Naughtona,b
aDepartment of Computer Science, National University of Ireland, Maynooth, Ireland
bUniversity of Oulu, RFMedia Laboratory, Oulu Southern Institute, Vierimaantie 5, 84100 Ylivieska, Finland
ABSTRACT
This study investigates segmentation algorithms applicable to digital holography. An assessment of image segmentation techniques applied to intensity images of reconstructions of digital holograms is provided. Digital holography differs from conventional imaging in that 3D information is encoded. This allows depth information to be exploited so that focusing on 3D objects, or parts thereof, at different depths can be achieved. In this paper, segmentation of features is attained in microscopic and macroscopic scenes. We investigate a number of recently proposed segmentation techniques including (i) depth from focus, (ii) active contours and (iii) hierarchical thresholding. The influence of noise reduction on the segmentation capabilities of each of the techniques on these scenes is demonstrated. For the macroscopic scenes, each technique is applied before and after speckle noise reduction is performed using a wavelet-based approach. The performance of the segmentation techniques on the intensity information obtained from reconstructed holograms of microscopic scenes is also investigated before and after twin-image reduction has been applied. A comparison of the techniques and their performances in these circumstances is provided.
1. INTRODUCTION
Digital holography1 involves the recording of an interference pattern between an object beam and a known reference beam by an imaging sensor. Reconstruction is achieved on a computer by numerical propagation to the original object plane. Digital holograms can be obtained in a number of ways, including in-line and off-axis architectures. In-line holography involves only one capture and, while full resolution can be maintained,2 it requires digital post-processing. Post-processing is required because the twin image, the zero-order diffraction term (known as the DC term), and the real image overlap and are not easily distinguishable from one another. A great advantage of in-line microscopy is that no optical elements are required other than a camera, a laser and a pinhole. Off-axis capture also entails only one capture and does not require the same level of post-processing, as the twin image, DC term and real image are easily separable in the Fourier domain. However, only approximately a quarter of the resolution is maintained.3 Furthermore, the optical implementation is considerably more complicated, involving numerous optical elements and difficult alignments. Phase-shift interferometry (PSI) is an adaptation of in-line holography in which four in-line captures are combined digitally so that the twin image and DC terms are removed and full resolution is maintained.4 The microscopic scenes presented here have been captured using the in-line method and the macroscopic scenes have been recorded using the PSI method of capture.
In image processing, segmentation subdivides an image into regions. Typically, the purpose of segmentation into these regions is to isolate features and/or feature boundaries. In digital holography, however, not all parts of a feature or an imaged object will necessarily be in focus in a given reconstruction at a specific depth. Segmentation can be performed on a single reconstruction at a specific depth, or on multiple independently focused reconstructions at different depths,5 which, when combined, can be considered a topography of the scene. The segmentation algorithms that are considered in this paper all segment based on different criteria. These algorithms are level set active contours (LSAC), depth from focus (DFF), and hierarchical thresholding
Further author information: KM: email@example.com; TN: firstname.lastname@example.org
Optics and Photonics for Information Processing II, edited by Abdul Ahad Sami Awwal,
Khan M. Iftekharuddin, Bahram Javidi, Proc. of SPIE Vol. 7072, 707209, (2008)
0277-786X/08/$18 · doi: 10.1117/12.796163
Proc. of SPIE Vol. 7072 707209-1
(HT). The LSAC technique is gradient-dependent, the DFF method relies on focus information and the HT approach is computed based on intensity.
Active contours segment scenes using dynamic curves to isolate boundaries in images. One geometric adaptation of active contours uses a level set function to influence the evolution of some initial curve.6 The curve can be initialized manually or automatically (e.g. just within the boundary of the image) and can evolve into single or multiple feature boundaries. The level set function influences the evolution of the dynamic curve(s) based on a description of the conditions of internal and external energies. Thus, a priori knowledge of the image contents can be used to bias the evolution of the curve, as it lends more detail to the description of the influencing conditions of internal and external energies; e.g. the prominence of noise, the feature size relative to the image, an estimate of the shape of sought features and the stringency on the alteration of this curve shape. This method is currently applicable to a single plane only.
Depth from focus7 is an image processing technique that is primarily used to extract depth information from images. The output focus information from DFF can be used to segment a scene into object and background7 or into multiple objects or object regions.8 We first define a block size n × n, and create our focus map by calculating a pre-defined focus measure on every overlapping block of size n × n in the reconstruction. The background in our digital holograms can then be segmented by applying a simple threshold to the focus map. Alternatively, if the depth of focus of the scene encoded in the digital hologram is large, we can calculate a maximum focus map from the focus maps of multiple independently focused reconstructions. This produces a more accurate segmentation mask5 but is beyond the scope of this paper.
Hierarchical thresholding9 is a simple approach that deals with noisy images by generating a hierarchy of input images, from low to high resolution. Low-resolution images are blurred versions of the input image. The idea is that rough feature boundaries can be extracted by thresholding the lowest-resolution image. Then the corresponding neighbourhoods of those boundary pixels in the next-lowest-resolution image are thresholded in the same way. This is done hierarchically until an accurate version of the original boundary has been found. The disadvantage of this method is that it relies on features having different intensities from the background, and some knowledge of this is required so that a threshold can be specified. The threshold can be generated automatically if there is some information available about the intensity of features versus background. Blurring using a Gaussian filter can introduce false boundaries in the image, so boundaries found within five pixels of the border are ignored.
Noise can corrupt segmentation results, in particular for methods that are gradient-dependent. In the case of macroscopic objects, speckle noise is inherent in digital holography and degrades reconstructed images. Speckle occurs when coherent light is diffused by an optically rough surface. Speckle is present in digital holography as coherent light is used in the recording process.10 Speckle evolves over time and is approximately a signal-independent multiplicative noise.11 Since microscopic samples are not in general rough, it is a much more prominent issue for macroscopic scenes than for microscopic scenes. Methods for the suppression of speckle noise in macroscopic digital hologram reconstructions have been developed.10,12–14 For macroscopic objects we assess the performance of the three segmentation algorithms described above before and after speckle noise has been reduced using a stationary wavelet transform.15 Speckle noise is not the only type of noise associated with reconstructed holograms. In the case of our microscopic digital holographic images recorded using only the in-line architecture, a major obstruction of information is the existence of a twin image. The twin is a feature of holography that contaminates a reconstruction and so can be considered noise. For a reconstruction at a given depth, d, there exists an in-focus real image of the object and an out-of-focus twin image of the object. This twin image is in focus at twice that distance away, i.e. at a depth −d. Reduction of the twin image is achieved for the microscopic scenes using the solution provided by Latychevskaia et al. 2007.16 The performance of the segmentation techniques is assessed before and after twin-image reduction has been applied.
This paper is organized as follows: in Section 2 we outline the segmentation techniques applied, including the parameters and constraints applicable to each. In Section 3 the physical configurations of the set-ups used are described. In Section 4 we present some of our experimental results, which best demonstrate the capabilities and inadequacies of the segmentation techniques applied. In Section 5 we explain these findings and Section
6 concludes this paper.
2. ALGORITHMS AND METRICS
2.1 Level set active contours (LSAC)
LSAC is a technique that was developed by Li et al. in 2006.6 It is a segmentation technique intended for application to a 2D scene that isolates boundaries based on gradient information. This method requires an initial contour, which can be defined manually, predicted intelligently, or generated automatically. There are a number of factors that can be considered a priori, e.g. the size of the features sought with respect to the area of the input image and the level of detail to adhere to. The latter can be used to adapt to the extent of noise in the image and the rigidity of the contour when adapting to shapes, which can be used to bias the evolution of the curve. These details can, however, be considered a drawback of this method if the level of noise is unknown or if features with both complex and simple compositions occur over a range of sizes in a single scene.
The success of LSAC is not hampered by a poor initial contour. However, the direction of evolution, either outward from the initial contour or inward from it, must be defined. Generic automation of the initial contour defines the contour as the rectangle several pixels within the border of the image. Although this approach is effective, a good initialization can speed up the process considerably. Having a good initial contour avoids computationally expensive processing of background areas when seeking to segment features in a scene. Manual definition of the initial contour may be possible if features are expected, or known, to reside in a specific region of the image.
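The full variational formulation of Li et al.6 is beyond a short listing, but the core mechanic — a level set function whose zero contour moves at a speed modulated by an edge indicator that is small on strong gradients — can be sketched as follows. The simple speed law, the function names and the use of numpy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def edge_indicator(image):
    """g = 1 / (1 + |grad I|^2): close to 0 on edges, close to 1 in flat regions."""
    gy, gx = np.gradient(image.astype(float))
    return 1.0 / (1.0 + gx**2 + gy**2)

def level_set_step(phi, g, dt=0.5, nu=1.0):
    """One explicit update of a level set function phi whose zero contour
    moves at speed nu, slowed by the edge indicator g.  The sign of nu
    sets whether the contour shrinks or grows.  A bare-bones sketch of
    gradient-based level set evolution, not Li et al.'s formulation."""
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx**2 + gy**2)
    return phi + dt * nu * g * grad_mag
```

Iterating `level_set_step` moves the zero contour until it stalls where `g` is small, i.e. on image edges.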
2.2 Depth from focus (DFF)
The reconstructions of our digital holograms are of size M × N. We store the intensity of the reconstruction,
at depth z, in Iz, which is then used to create our focus map. For the experiments in this paper we select
variance as our focus measure. We calculate our focus value for each pixel by calculating variance on the
n×n pixel overlapping blocks approximately centered on each pixel, and address each block with (k,l) where
k ∈ [0,M − 1],l ∈ [0,N − 1]. The focus map is defined by
Iz(x,y) − Iz(k,l)
where Iz(k,l) is defined
Where the variance is low, this indicates a background region. A threshold τ is chosen and FMap is trans-
where 0 denotes a background pixel and 1 denotes an object pixel. The binary image SMask is our segmentation
mask. Finally, we apply a mathematical morphology erosion operation (with neighborhood ?n/2? × ?n/2?)
to SMask to shrink the boundaries of the object; our use of overlapping blocks uniformly enlarges the mask.
The use of other focus measures is advisable when the digital holographic reconstructions contain pure phase
or pure amplitude objects. In these cases when the digital hologram encodes pure phase or pure amplitude
objects it is expected that the use of variance as the focus measure will result in unsuccessful segmentation.
In these circumstances, we expect that the implementation of other focus measures proven to work for these
objects would result in more accurate segmentation.17,18
if FMapz(k,l) ≥ τ
if FMapz(k,l) < τ,
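A minimal sketch of this pipeline — a variance focus map over overlapping blocks followed by a threshold — assuming numpy. The erosion step is omitted, τ is left as a free parameter, and the cumulative-sum box filter is our own illustrative shortcut rather than the authors' code.

```python
import numpy as np

def dff_segment(intensity, n=8, tau=None):
    """Depth-from-focus segmentation sketch: variance over overlapping
    n-by-n blocks as the focus measure, thresholded into a binary mask."""
    I = intensity.astype(float)

    def box_mean(a, n):
        # Mean over an n x n block around each pixel, via 2D cumulative sums;
        # edge padding gives every pixel an (approximately centred) block.
        p = np.pad(a, n // 2, mode='edge')
        c = np.cumsum(np.cumsum(p, 0), 1)
        c = np.pad(c, ((1, 0), (1, 0)))          # c[i, j] = sum of p[:i, :j]
        M, N = a.shape
        return (c[n:n + M, n:n + N] - c[:M, n:n + N]
                - c[n:n + M, :N] + c[:M, :N]) / (n * n)

    fmap = box_mean(I**2, n) - box_mean(I, n)**2   # block variance (focus map)
    if tau is None:
        tau = fmap.mean()                           # crude default threshold
    return (fmap >= tau).astype(np.uint8)           # 1 = object, 0 = background
```

For a flat background the block variance is near zero, so any textured (in-focus) region rises above τ.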
2.3 Hierarchical thresholding (HT)
HT is a simple method that relies on a difference in intensity between the foreground and background.9 An assumption is made that target features will be visibly distinguishable from the background. A threshold, T, is estimated so that features in the image brighter than the average intensity of the image are isolated. This approach deals with noise by processing the image at a number of different levels. For n mask sizes (s1, s2, ..., sn, where s1 < s2 < ... < sn), there are n levels. Each level, Li, is the original image Gaussian-blurred with a neighborhood mask of size si. Larger masks are necessary to compensate for high noise levels and smaller masks are needed so that fine details in boundaries can be detected. Therefore an a priori estimate of the extent of noise in the image is required. Starting with the top level, L1, i.e. the most blurred version of the image, a boundary, B, is defined based on the gradient edge of those regions that are kept by the application of the threshold T. Then, for the remaining n − 1 mask sizes, each level, Li, is used to update B. The neighborhood of each boundary pixel is considered in Li, honing in on the true boundary. When n levels have been processed, B represents the boundary of the segmented region(s).
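A simplified sketch of this hierarchy, assuming numpy: Gaussian sigmas stand in for the paper's mask sizes, and the boundary-refinement rule (re-threshold only within one pixel of the current boundary) is a deliberately bare version of the method described above, not the authors' implementation.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via direct 1D convolutions (sketch only)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode='edge')
    out = np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 1, out)

def hierarchical_threshold(img, sigmas=(8, 4, 2, 1), T=None):
    """HT sketch: threshold the most blurred level for a rough mask, then
    at each finer level re-threshold only near the current boundary."""
    if T is None:
        T = img.mean()
    mask = gaussian_blur(img, sigmas[0]) > T           # rough segmentation
    for s in sigmas[1:]:
        level = gaussian_blur(img, s) > T
        interior = (mask & np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                         & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
        b = mask & ~interior                           # current boundary pixels
        near = b.copy()
        for ax in (0, 1):
            for sh in (1, -1):
                near |= np.roll(b, sh, ax)             # dilate boundary by one pixel
        mask = np.where(near, level, mask)             # refine only near the boundary
    return mask
```

Pixels far from the rough boundary keep their coarse labels, so isolated noise in the fine levels cannot seed new false regions.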
3. EXPERIMENTAL DIGITAL HOLOGRAPHY
The digital Gabor in-line set-up that we used is described in detail by Garcia-Sucerquia et al. in 200619 and is shown in Fig. 1. For this experiment a coherent beam is focused on a 1 µm pinhole from which a spherical wave emanates. The pinhole acts as a lens (microscope objective), emitting a wavefront with a spherical phase distribution and high spatial coherence. The numerical aperture20 (NA) of the pinhole is 0.5λ/dp = 0.3925, where dp is the diameter of the pinhole and λ is the wavelength of the laser, which is 785 nm. Our resolution is constrained by the numerical aperture due to the finite spatial extent of the camera, NA = (W/2)/√((W/2)² + (z1 + z2)²), where W is the width of the sensor, z1 is the distance from the pinhole to the sample and z2 is the distance from the sample to the CCD, as shown in Fig. 1. The resolution condition is maintained when L < dx/M, where the lateral resolution L = λ/(2NA), dx is the pixel size and M = (z1 + z2)/z1 is the magnification. The distance to numerically propagate back to the scene is given by depth = −z2(z1 + z2)/z1.
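The quoted quantities can be checked numerically. In the sketch below, λ, dp and the oil-sample distances z1 = 2.5 cm and z2 = 4.2 cm are taken from the text, while the sensor width W is an illustrative assumption (the in-line sensor dimensions are not stated here; the 2048 × 7.4 µm figure belongs to the PSI camera).

```python
import math

# Quantities quoted in the text: 785 nm laser, 1 micron pinhole.
wavelength = 785e-9                      # m
d_p = 1e-6                               # pinhole diameter, m
NA_pinhole = 0.5 * wavelength / d_p      # matches the quoted NA of 0.3925

# Camera-limited NA and resolution for the oil-sample geometry quoted later
# in the paper; W is an assumption for illustration only.
z1, z2 = 2.5e-2, 4.2e-2                  # pinhole-to-sample, sample-to-CCD (m)
W = 2048 * 7.4e-6                        # assumed sensor width (m)
NA_cam = (W / 2) / math.sqrt((W / 2)**2 + (z1 + z2)**2)
L = wavelength / (2 * NA_cam)            # lateral resolution
M = (z1 + z2) / z1                       # magnification
depth = -(z2 * (z1 + z2)) / z1           # numerical propagation distance (negative)
```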
Figure 1. The in-line set-up used to capture microscopic scenes. SF = spatial filter, CCD = charge coupled device (camera). The SF creates a spherical wave. The interference of the emerging spherical wave with the wave scattered in passing through the object is recorded by the CCD.
For our macroscopic objects we record interferograms with an optical system based on a Mach-Zehnder interferometer, see Fig. 2. The interferograms are real-valued images resulting from the interference between an object wave and a reference wave. In our system, a linearly polarized Helium-Neon (632.8 nm) laser beam is expanded, collimated, and split into object and reference waves. The object wave illuminates an object placed at a distance d that is selected based on the object size in order to avoid aliasing at the CCD.21 Our CCD camera has 2048 × 2032 pixels of size 7.4 µm in both dimensions. We denote by U0(x,y) the complex amplitude distribution immediately in front of the 3D object. The reference wave passes through retardation plates RP1 and RP2, and by selectively rotating the plates we can achieve four phase-shift permutations. An interferogram is recorded for every phase shift and we then use these four interferograms and a four-frame PSI algorithm4,22 to compute the camera-plane complex field H0(x,y), the digital hologram (DH). PSI is a digital holographic technique that calculates in-line holograms free of the twin image and DC term.
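The four-frame combination step can be sketched as follows. This is the standard four-frame PSI formula under the convention that the reference acquires phase +δk, an illustration of the combination only (up to a constant factor and the unknown reference amplitude), not the authors' full pipeline.

```python
import numpy as np

def psi_four_frame(I0, I90, I180, I270):
    """Combine four interferograms with reference phase shifts of
    0, 90, 180 and 270 degrees into a complex camera-plane field.
    I_k = |O|^2 + |R|^2 + 2|O||R| cos(phase - delta_k), so subtracting
    opposite frames isolates the cosine and sine of the object phase
    and cancels the DC terms."""
    return (I0 - I180) + 1j * (I90 - I270)
```

For a unit-amplitude reference the result equals 4·O, where O is the object field at the camera plane; the twin image and DC term cancel exactly.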
Numerical reconstruction is performed using MATLAB. The recorded holograms are digitally sampled versions of the continuous signal incident on the CCD. The Fresnel diffraction integral (Fresnel transform) describes the propagation of a scalar electromagnetic wavefield in the paraxial approximation. It can be numerically approximated by either the spectral method or the direct method.23 The Fresnel transform can be written using two different integral definitions, one employing one continuous Fourier transform integral and the other employing two. By replacing the continuous coordinates in these two Fresnel representations with discrete counterparts for sampled functions we obtain the spectral method or the direct method. The central discrete transformation in both cases is the discrete Fourier transform; thus, the fast Fourier transform (FFT) can be applied23–25 to calculate it. For scenes captured at short distances from the CCD, i.e. microscopic scenes, the spectral method is preferred.23 For longer distances from the sample to the CCD, i.e. macroscopic scenes, direct integration of the Fresnel diffraction formula was applied.23 The preference for one method over the other is due to the output sampling interval being different for the two methods. For the spectral method the output sampling interval is the same as the input one and is therefore equal to the pixel size of the camera. For the direct method the output sampling interval is directly proportional to the distance propagated and is very small for short distances.
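A sketch of the spectral method, assuming numpy: forward FFT, multiplication by the paraxial (Fresnel) transfer function, inverse FFT. The output grid spacing equals the input spacing dx, as noted above. This is a generic implementation, not the authors' MATLAB code.

```python
import numpy as np

def fresnel_spectral(field, wavelength, dx, z):
    """Spectral-method Fresnel propagation of a sampled complex field
    over distance z.  The constant phase factor e^{ikz} is dropped."""
    M, N = field.shape
    fy = np.fft.fftfreq(M, d=dx)[:, None]   # spatial frequencies (cycles/m)
    fx = np.fft.fftfreq(N, d=dx)[None, :]
    # Paraxial transfer function: unit modulus, quadratic phase in frequency.
    H = np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Since |H| = 1 everywhere, the propagation is unitary (energy conserving), and z = 0 returns the input field; negative z propagates back toward the scene, as in the depth formula of Section 3.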
Figure 2. The PSI set-up used to capture macroscopic scenes. M = mirror, P = polariser, NDF = neutral density filter, C = collimator, BS = beam splitter and RP = retardation plate
The data for microscopic scenes are not plagued by speckle noise in the same way that macroscopic scenes are. This is because the data samples are smooth at this scale. As speckle does not corrupt holographic microscopy, shape information can be observed in the phase image by employing Zernike's phase contrast technique. Further information can be obtained from the intensity image. Unwrapping may be required due to phase ambiguities, which can be caused when the difference between the depths of two touching pixels is greater than the wavelength of the light used to record the hologram. We examine the performance of the segmentation techniques on intensity and phase information here.
4. RESULTS
We recorded holograms for this experiment using the recording approaches described previously. The segmentation methods described were applied to reconstructions∗ of holograms of macroscopic and microscopic scenes. The results obtained by these techniques for our examples are compared below. From a computational point of view, however, we cannot neglect a comparison of the performance of these techniques. The required a priori knowledge and the estimated average time taken to apply each of these methods to the first example shown is given in Table 1. The time taken for the LSAC technique is dependent on the parameters given and the content of the image.
∗The CPU-based convolution approach was implemented in Matlab using the fast FFTW library26on a dedicated
server equipped with a Dual Core Xeon 1.6 GHz CPU and 4 Gigabytes of RAM.
Figure 3. (a) The original reconstructed coin and the segmentation results obtained using (b) LSAC (c) DFF and (d) HT
Figure 4. (a) The reconstructed coin after speckle reduction using a SWT with a Haar mother wavelet at 5 levels of
decomposition and the segmentation results obtained on this using (b) LSAC (c) DFF and (d) HT
Figure 5. (a) The original intensity image of the reconstructed Lego® block and the segmentation results obtained on this using (b) LSAC (c) DFF and (d) HT
Figure 6. (a) The intensity of the reconstructed Lego® block post speckle reduction using a SWT with a Haar mother wavelet at 4 levels of decomposition and the segmentation results obtained on this using (b) LSAC (c) DFF and (d) HT
Figure 7. (a) The intensity of the reconstructed oil sample, corrupted by the twin image, and the segmentation results obtained on this using (b) LSAC (c) DFF and (d) HT
Figure 8. (a) The intensity of the reconstructed oil sample, post twin image reduction and the segmentation results
obtained on this using (b) LSAC (c) DFF and (d) HT
Table 1. A priori knowledge and processing times for each of the methods

LSAC — A priori knowledge: (i) an estimate of noise in the image, (ii) an estimate of the feature size and (iii) an estimate of the rigidity of the object boundary. Processing time: 74188 seconds for an initial contour of 1840×1760 pixels with 10,000 iterations on the 2048×2048 pixel coin image.

HT — A priori knowledge: (i) an estimate of noise so a set of mask sizes can be chosen and (ii) an estimate of the intensity of the feature(s) in the image so a threshold can be specified. Processing time: 43 seconds for the 2048×2048 pixel coin image with mask sizes (21, 17, 13, 9, 7).

DFF — A priori knowledge: an estimate of the intensity of the feature(s) in the scene so that a threshold can be specified. Processing time: 1326 seconds for the 2048×2048 pixel coin image.
As this technique is best suited to segmentation of scenes with a clearer background, it is not surprising that the computation time for such noisy images is pronounced.
First we consider the macroscopic scenes. As discussed previously, speckle noise is inherent in reconstructions
of macroscopic holograms. Here we show that this can obstruct successful segmentation of features in these
scenes. It should be noted that the spatial resolution of the CCD that captured these holograms was 2032×2048
and the reconstruction algorithm used pads this to a 2048×2048 space. Therefore a seemingly false boundary
exists for the HT results, which is in fact the boundary of this padded region.
The first example shown here is of an Irish euro one-cent coin, see Fig. 3 (a). There is prominent speckle noise in this reconstruction and the twin image (remaining due to imperfect PSI) causes the bottom right section of the coin to appear with less clarity than the rest of the feature. We show in the segmentation results in Fig. 3 (b-d) that this causes problems for each of the methods, in particular for the LSAC method and the HT method. The resulting contour obtained by LSAC cuts out this region entirely. The HT method fails to recognise the features in the noisiest area of the scene. In fact, speckle noise is detected as features by HT in the upper left section of the coin. The DFF results are also incorrect in this region. This is in contrast to the results obtained on the same reconstruction after the application of noise-reducing algorithms. In the case shown in Fig. 4 (a) a stationary wavelet transform (SWT) has been applied using a Haar mother wavelet at 5 levels of decomposition.15 The results for all methods improved, as conveyed in Fig. 4 (b-d). The impact of the twin image still obstructs segmentation, though the influence of speckle noise has been reduced drastically. The LSAC method detects a boundary that is correct save for the corrupted region. The HT method correctly identifies most of the boundary of the coin and the indented features. There is a minor improvement in the DFF result.
The coin is a relatively flat feature and all elements are in focus at the same distance. Next we demonstrate the performance of the same techniques on a deeper object. This time we consider a reconstruction of a Lego® block. The reconstruction depth is chosen such that for the single plane shown the entire block is reasonably in focus. This reconstruction is less noisy than the coin example and the segmentation results are more successful. It can be observed in Fig. 5 that the shadows cast by the fore features on the block cause part of the block to be excluded in the findings. This is expected, as features occluded by shadows are considered background regions by image segmentation algorithms. Also, the top of the block is not distinguished correctly by any of the methods. The LSAC detects a rough boundary surrounding the feature, though the prominence of noise is almost as strong as the edge of the feature, so the boundary detected is a poor likeness of the actual boundary. The DFF approach finds a straight top below the actual top of the feature, and the HT method finds a corrupted straight line where the top of the feature is. The HT is the only method that identifies the shadow between the bottom of the second block and the partially illuminated block below it. The results obtained for this reconstruction after application of a SWT using a Haar mother wavelet and 4 levels of decomposition are shown in Fig. 6. The LSAC results show a major improvement, with a much truer representation of the boundary of the feature. The DFF and HT results are marginally improved, though as noise did not corrupt these results originally there was not much scope for improvement.
Now we consider a microscopic scene, a drop of oil on a microscope slide. The slide is placed a distance of 2.5 cm from a 1 µm circular aperture (pinhole) and 4.2 cm from the CCD. We show in Fig. 7 (a) that the twin image,
inverted and out of focus, corrupts the image. The poor performance of the gradient-based LSAC technique and the variance-based DFF technique in this circumstance is demonstrated in Fig. 7 (b,c). The result shown in Fig. 7 (d) shows that the incremental blurring applied by the HT method successfully omits the twin image from the segmentation results. In Fig. 8 we demonstrate the improvement in the performance of all of the segmentation techniques when the twin image has been reduced using a thousand iterations of the solution presented in Latychevskaia et al. 2007.16 The LSAC and DFF results are much more successful when the twin-image effect is reduced, as can be seen in Fig. 8 (b,c). The result obtained using the HT method is further improved, see Fig. 8 (d).
The negative influence of noise on the segmentation techniques outlined in this paper only highlights the necessity to reduce or remove noise wherever possible. Depending on the capture method, the type of noise that is most prominent should be identified and reduced. In the case of the macroscopic scenes shown here, captured by the PSI method, the DFF and HT techniques perform well, in particular after wavelet-based speckle reduction has been applied. The DFF technique finds the gross features and the HT is efficient for the extraction of details. In the microscopic scene presented, the presence of the twin image, which is a feature of holograms captured by the in-line method, corrupts results for the DFF and LSAC techniques and only the HT technique performs well. However, once the twin image has been reduced, all three techniques segment the drop of oil with much improved precision.
In the microscopic scenes, phase contrast images have not been used as the presence of the twin image complicates unwrapping and obscures unwrapped results. As the sample being considered is semi-transparent, the shape information present in the phase image would, in the absence of the twin image, be a much simpler case for segmentation. The influence of background noise would be minimal and it is expected that the LSAC technique, which performs poorly in the examples given here, and the HT technique would be most effective in this case. This is in contrast to the macroscopic scenes, which are based on reflection, where it is difficult to extract shape information from the random phase. The LSAC and HT techniques have not, to the best of our knowledge, been applied to digital holographic phase information before.
ACKNOWLEDGMENTS
We would like to thank the Irish Research Council for Science, Engineering and Technology, and Science Foundation Ireland, under the National Development Plan, for funding this research.
REFERENCES
1. Kreis, T., [Handbook of Holographic Interferometry], Wiley-VCH GmbH & Co. KGaA, 1st ed. (2005).
2. Gabor, D., "A new microscopic principle," Nature 161, 777–778 (1948).
3. Leith, E. N. and Upatnieks, J., "Wavefront reconstruction with diffused illumination and three-dimensional objects," Journal of the Optical Society of America 54, 1295–1301 (1964).
4. Yamaguchi, I. and Zhang, T., "Phase-shifting digital holography," Optics Letters 22, 1268–1270 (1997).
5. McElhinney, C. P., McDonald, J. B., Castro, A., Frauel, Y., Javidi, B., and Naughton, T. J., "Depth-independent segmentation of macroscopic three-dimensional objects encoded in single perspectives of digital holograms," Optics Letters 32, 1229–1231 (2007).
6. Li, C., Xu, C., Gui, C., and Fox, M., "Level set evolution without re-initialization: A new variational formulation," Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (2005).
7. McElhinney, C. P., Hennelly, B. M., and Naughton, T. J., "Extended focused imaging for digital holograms of macroscopic three-dimensional objects," Applied Optics 47, D71–D79 (2008).
8. McElhinney, C. P., Hennelly, B. M., and Naughton, T. J., "Multiple object segmentation in macroscopic three-dimensional scenes from a single perspective using digital holography," in preparation for Applied Optics.
9. Sonka, M., Hlavac, V., and Boyle, R., [Image Processing, Analysis, and Machine Vision], Brooks/Cole Publishing Company, 2nd ed. (1999).
10. Maycock, J., Hennelly, B. M., McDonald, J. B., Frauel, Y., Castro, A., Javidi, B., and Naughton, T. J., "Reduction of speckle in digital holography by discrete Fourier filtering," Journal of the Optical Society of America A 24, 1617–1622 (2007).
11. Lim, L. S., "Techniques for speckle noise removal," Optical Engineering 20, 670–678 (1981).
12. Quan, C., Kang, X., and Tay, C. J., "Speckle noise reduction in digital holography by multiple holograms," Optical Engineering 46, 115801 (2007).
13. Maycock, J., McElhinney, C. P., McDonald, J. B., Naughton, T. J., Hennelly, B. M., and Javidi, B., "Speckle reduction in digital holography using independent component analysis," Proc. SPIE 6187.
14. Ma, L., Wang, H., Jin, W., and Jin, H., "Reduction of speckle noise in the reconstructed image of digital hologram," Proc. SPIE 6832, 683227 (2007).
15. Molony, K. M., Maycock, J., Hennelly, B. M., McDonald, J. B., and Naughton, T. J., "A comparison of wavelet analysis techniques in digital holograms," Proc. SPIE 6994, 699412 (2008).
16. Latychevskaia, T. and Fink, H., "Solution to the twin image problem in holography," Physical Review Letters 98, 233901 (2007).
17. Dubois, F., Schockaert, C., Callens, N., and Yourassowsky, C., "Focus plane detection criteria in digital holography microscopy by amplitude analysis," Optics Express 14, 5895–5908 (2006).
18. Langehanenberg, P., Kemper, B., Dirksen, D., and von Bally, G., "Autofocusing in digital holographic phase contrast microscopy on pure phase objects for live cell imaging," Applied Optics 47, D176–D182.
19. Garcia-Sucerquia, J., Xu, W., Jericho, S. K., Klages, P., Jericho, M. H., and Kreuzer, H. J., "Digital in-line holographic microscopy," Applied Optics 45, 836–850 (2006).
20. Lee, J., Yang, H., and Hahn, J., "Wavefront error measurement of high-numerical-aperture optics with a Shack-Hartmann sensor and a point source," Applied Optics 46, 1411–1415 (2007).
21. Matoba, O., Hosoi, K., Nitta, K., and Yoshimura, T., "Properties of digital holography based on in-line configuration," Optical Engineering 39, 3214–3219 (2000).
22. Bruning, J. H., Herriott, D. R., Gallagher, J. E., Rosenfeld, D. P., White, A. D., and Brangaccio, D. J., "Digital wavefront measuring interferometer for testing optical surfaces and lenses," Applied Optics 13.
23. Mendlovic, D., Zalevsky, Z., and Konforti, N., "Computation considerations and fast algorithms for calculating the diffraction integral," Journal of Modern Optics 44, 407–414 (1997).
24. Schnars, U. and Juptner, W., "Direct recording of holograms by a CCD target and numerical reconstruction," Applied Optics 33, 179–181 (1994).
25. Hennelly, B. and Sheridan, J., "Generalizing, optimizing and inventing numerical algorithms for the fractional Fourier, Fresnel and linear canonical transforms," Journal of the Optical Society of America A 22, 917–927 (2005).
26. Frigo, M. and Johnson, S. G., "The design and implementation of FFTW3," Proceedings of the IEEE 93.