# Realistic 3D coherent transfer function inverse filtering of complex fields

**ABSTRACT** We present a novel technique for three-dimensional (3D) image processing of complex fields. It consists in inverting the coherent image formation by filtering the complex spectrum with a realistic 3D coherent transfer function (CTF) of a high-NA digital holographic microscope. By combining scattering theory and signal processing, the method is demonstrated to yield the reconstruction of a scattering object field. Experimental reconstructions in phase and amplitude are presented under non-design imaging conditions. The suggested technique is best suited for an implementation in high-resolution diffraction tomography based on sample or illumination rotation.



Yann Cotte,¹'* Fatih M. Toy,¹ Cristian Arfire,¹ Shan Shan Kou,¹ Daniel Boss,² Isabelle Bergoënd,¹ and Christian Depeursinge¹

¹Ecole Polytechnique Fédérale de Lausanne (EPFL), Microvision and Microdiagnostics Group (MVD), CH-1015 Lausanne, Switzerland

²Ecole Polytechnique Fédérale de Lausanne (EPFL), Laboratory of Neuroenergetics and Cellular Dynamics (LNDC), CH-1015 Lausanne, Switzerland

*yann.cotte@a3.epfl.ch


© 2011 Optical Society of America

OCIS codes: (090.1995) Digital holography; (110.0180) Microscopy; (100.1830) Deconvolution; (100.6890) Three-dimensional image processing; (180.6900) Three-dimensional microscopy; (100.5070) Phase retrieval.

References and links

1. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, "Optical diffraction tomography for high resolution live cell imaging," Opt. Express 17(1), 266-277 (2009).
2. C. J. R. Sheppard and M. Gu, "Imaging by a high aperture optical-system," J. Mod. Opt. 40, 1631-1651 (1993).
3. S. Frisken Gibson and F. Lanni, "Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy," J. Opt. Soc. Am. A 8(10), 1601-1613 (1991).
4. R. Arimoto and J. M. Murray, "A common aberration with water-immersion objective lenses," J. Microsc. 216, 49-51 (2004).
5. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University Press, 1999).
6. A. Devaney, "A filtered backpropagation algorithm for diffraction tomography," Ultrason. Imaging 4(4), 336-350 (1982).
7. Y. Cotte, M. F. Toy, E. Shaffer, N. Pavillon, and C. Depeursinge, "Sub-Rayleigh resolution by phase imaging," Opt. Lett. 35, 2176-2178 (2010).
8. Y. Cotte, M. F. Toy, N. Pavillon, and C. Depeursinge, "Microscopy image resolution improvement by deconvolution of complex fields," Opt. Express 18(19), 19462-19478 (2010).
9. M. Gu, Advanced Optical Imaging Theory (Springer-Verlag, 2000).
10. A. Marian, F. Charrière, T. Colomb, F. Montfort, J. Kühn, P. Marquet, and C. Depeursinge, "On the complex three-dimensional amplitude point spread function of lenses and microscope objectives: theoretical aspects, simulations and measurements by digital holography," J. Microsc. 225, 156-169 (2007).
11. Y. Cotte and C. Depeursinge, "Measurement of the complex amplitude point spread function by a diffracting circular aperture," in Focus on Microscopy, Advanced Linear and Non-linear Imaging, pp. TU-AF2-PAR-D (2009).
12. F. Montfort, F. Charrière, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, "Purely numerical compensation for microscope objective phase curvature in digital holographic microscopy: influence of digital phase mask position," J. Opt. Soc. Am. A 23(11), 2944-2953 (2006).

#145314 - $15.00 USD

(C) 2011 OSA

Received 4 Apr 2011; revised 29 Jun 2011; accepted 30 Jun 2011; published 8 Jul 2011

1 August 2011 / Vol. 2, No. 8 / BIOMEDICAL OPTICS EXPRESS 2216

13. T. Latychevskaia, F. Gehri, and H.-W. Fink, "Depth-resolved holographic reconstructions by three-dimensional deconvolution," Opt. Express 18(21), 22527-22544 (2010).
14. P. Sarder and A. Nehorai, "Deconvolution methods for 3-D fluorescence microscopy images," IEEE Signal Process. Mag. 23, 32-45 (2006).
15. J. G. McNally, T. Karpova, J. Cooper, and J. A. Conchello, "Three-dimensional imaging by deconvolution microscopy," Methods 19, 373-385 (1999).
16. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, "Compressive holography," Opt. Express 17(15), 13040-13049 (2009).
17. E. Cuche, P. Marquet, and C. Depeursinge, "Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms," Appl. Opt. 38, 6994-7001 (1999).
18. X. Heng, X. Q. Cui, D. W. Knapp, J. G. Wu, Z. Yaqoob, E. J. McDowell, D. Psaltis, and C. H. Yang, "Characterization of light collection through a subwavelength aperture from a point source," Opt. Express 14, 10410-10425 (2006).
19. I. Bergoënd, C. Arfire, N. Pavillon, and C. Depeursinge, "Diffraction tomography for biological cells imaging using digital holographic microscopy," in Laser Applications in Life Sciences, Proc. SPIE 7376 (2010).
20. J. Braat, "Analytical expressions for the wave-front aberration coefficients of a tilted plane-parallel plate," Appl. Opt. 36(32), 8459-8467 (1997).
21. S. S. Kou and C. J. R. Sheppard, "Imaging in digital holographic microscopy," Opt. Express 15(21), 13640-13648 (2007).
22. S. S. Kou and C. J. R. Sheppard, "Image formation in holographic tomography: high-aperture imaging conditions," Appl. Opt. 48(34), H168-H175 (2009).
23. T. Colomb, J. Kühn, F. Charrière, C. Depeursinge, P. Marquet, and N. Aspert, "Total aberrations compensation in digital holographic microscopy with a reference conjugated hologram," Opt. Express 14(10), 4300-4306 (2006).
24. J. A. Lock, "Ray scattering by an arbitrarily oriented spheroid. II. Transmission and cross-polarization effects," Appl. Opt. 35(3), 515-531 (1996).
25. D. Q. Chowdhury, P. W. Barber, and S. C. Hill, "Energy-density distribution inside large nonabsorbing spheres by using Mie theory and geometrical optics," Appl. Opt. 31(18), 3518-3523 (1992).
26. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1968).
27. Y. Park, M. Diez-Silva, G. Popescu, G. Lykotrafitis, W. Choi, M. S. Feld, and S. Suresh, "Refractive index maps and membrane dynamics of human red blood cells parasitized by Plasmodium falciparum," Proc. Natl. Acad. Sci. U.S.A. 105(37), 13730-13735 (2008).
28. Z. Kam, B. Hanser, M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, "Computational adaptive optics for live three-dimensional biological imaging," Proc. Natl. Acad. Sci. U.S.A. 98(7), 3790-3795 (2001).
29. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern. 9(1), 62-66 (1979).
30. J. G. McNally, C. Preza, J.-A. Conchello, and L. J. Thomas, "Artifacts in computational optical-sectioning microscopy," J. Opt. Soc. Am. A 11(3), 1056-1067 (1994).

1. Introduction

High-resolution three-dimensional (3D) reconstruction of weakly scattering objects is of great interest for biomedical research. Diffraction tomography has been demonstrated to yield 3D refractive index (RI) distributions of biological samples [1]. For the use of such techniques in the fields of virology and cancerology, a spatial resolution in the sub-200 nm range is required. Consequently, experimental setups must shift to shorter wavelengths, higher numerical apertures (NA), and steeper illumination and/or sample rotation angles. However, the scaling of resolution to high-NA systems introduces strong diffraction and aberration sensitivity [2]. The use of microscope objectives (MO) under non-design conditions, in particular for sample rotation, introduces additional experimental aberrations that may further degrade resolution [3,4]. Unfortunately, the theory of diffraction tomography cannot correct for these conditions since it is based on direct filtering by an ideal Ewald sphere [5].

We present a novel technique that reconstructs the object scattered field using a high-NA MO under non-design imaging conditions. Opposed to classical reconstruction methods like filtered back projection [6], our approach is based on inverse filtering by a realistic coherent transfer function (CTF), namely 3D complex deconvolution. We expect this technique to lead to aberration correction and improved resolution [7,8]. By combining the theory of coherent image formation [9] and diffraction [5], the deduced theory enables reconstruction of the object scattered


field by inverse 3D CTF filtering. Under sample rotation, experimental evidence of this theory is presented. It confirms that the realistic 3D CTF can be directly reconstructed from a hologram acquisition of a complex point source [10,11]. For this purpose, DHM's feature of digital refocusing is exploited [12]. As simulations have shown, digital refocusing bears the capacity of optical sectioning [13]. By regularizing the complex filter function, phase degradation by noise amplification is suppressed, as anticipated from intensity deconvolution [14]. The effectiveness and importance of the proposed method are demonstrated with experimental applications.

Independent of the non-design imaging condition, our method serves to reconstruct the scattered complex object function from a single hologram acquisition. One reconstruction does not feature tomography but rather optical sectioning of one rotation-angle measurement. However, the reconstruction can be applied to various complex field acquisitions, which can ultimately be used for super-resolved diffraction tomography.

2. Description of method

The proposed method consists of three major innovations. First, for inverse filtering, the three-dimensional deconvolution of complex fields is formalized by complex noise filtering. Secondly, based on single hologram reconstruction, an experimental filter function is defined. Third, in a rigorous approach, the filtered field is used to retrieve the scattered object function.

2.1. Regularized 3D deconvolution of complex fields

For a coherently illuminated imaging system, the 3D image formation of the complex field U is expressed as the convolution of the complex object function, called o, and the complex amplitude point spread function (APSF), called h [9]:

$$U(\vec{r}_2) = \iiint_{-\infty}^{\infty} o(\vec{r}_1)\,h(\vec{r}_2-\vec{r}_1)\,\mathrm{d}x_1\,\mathrm{d}y_1\,\mathrm{d}z_1, \qquad (1)$$

where $\vec{r} = (x,y,z)$ denotes the location vector in the object space $\vec{r}_1$ and the image space $\vec{r}_2$, as shown in Fig. 1(a).

Fig. 1. Optical transfer of a point source in real and reciprocal space. In scheme (a), a practical Abbe imaging system with holographic reconstruction. In scheme (b), full Ewald's sphere under Born approximation in the reciprocal object plane according to Eq. (11).

Equation (1) can be recast into reciprocal space by a 3D Fourier transformation $\mathcal{F}$ defined as:

$$\mathcal{F}\{U(\vec{r}_2)\} = \iiint_{-\infty}^{\infty} U(\vec{r}_2)\exp[i2\pi(\vec{k}\cdot\vec{r}_2)]\,\mathrm{d}x_2\,\mathrm{d}y_2\,\mathrm{d}z_2. \qquad (2)$$

The reciprocal space, based on the free-space (index of refraction n = 1) norm of the wavenumber k with wavelength λ, relates to the spatial frequency ν and wave vector $\vec{k} = (k_x,k_y,k_z)$ by

$$k = |\vec{k}| = 2\pi\nu = \frac{2\pi}{\lambda}. \qquad (3)$$

According to the convolution theorem, applying Eq. (2) to Eq. (1) results in:

$$\underbrace{\mathcal{F}\{U(\vec{r}_2)\}}_{G(\vec{k})} = \underbrace{\mathcal{F}\{o\}}_{O(\vec{k})}\cdot\underbrace{\mathcal{F}\{h\}}_{c(\vec{k})}. \qquad (4)$$

Conventionally, the 3D Fourier transforms of U, o, and h are called G, the complex image spectrum, O, the complex object spectrum, and c, the coherent transfer function (CTF), as summarized in Eq. (4). The latter is bandpass limited through h, with the maximal lateral wave vector

vector,

kxy,c= ksinα,

and the maximal longitudinal wave vector

(5)

kz,c= k(1−cosα),

(6)

scaled by k from Eq. (3). The angle α indicates the half-angle of the maximum cone of light that can enter the MO (cf. Fig. 1) and is given by its $\mathrm{NA} = n_i\sin\alpha$ ($n_i$ is the immersion medium's index of refraction). Through Eq. (4), the complex image formation can be easily inverted:

$$o(\vec{r}_1) = \iiint_{-\infty}^{\infty} O(\vec{k})\exp[-i2\pi(\vec{k}\cdot\vec{r}_1)]\,\mathrm{d}k_x\,\mathrm{d}k_y\,\mathrm{d}k_z = \mathcal{F}^{-1}\!\left[\frac{G(\vec{k})}{c(\vec{k})}\right]. \qquad (7)$$

The 3D inverse filtering can be directly performed by dividing the two complex fields G and c. As known from intensity deconvolution [15], inverse filtering in the complex domain suffers from noise amplification wherever the denominator $c(\vec{k})$ in $G(\vec{k})/c(\vec{k})$ is small, particularly at high spatial frequencies.
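For orientation, the band-pass limits of Eqs. (5) and (6) are straightforward to evaluate numerically. The sketch below is our own illustration (not part of the paper) and uses the setup parameters quoted in section 2.3 (λ = 680 nm, dry MO with nominal NA = 0.7, so $n_i = 1$):

```python
import numpy as np

# Setup parameters quoted in section 2.3 of the paper
wavelength = 680e-9          # illumination wavelength [m]
NA, n_i = 0.7, 1.0           # dry microscope objective

k = 2 * np.pi / wavelength   # free-space wavenumber, Eq. (3)
alpha = np.arcsin(NA / n_i)  # half-angle of the acceptance cone

k_xy_c = k * np.sin(alpha)        # maximal lateral wave vector, Eq. (5)
k_z_c = k * (1 - np.cos(alpha))   # maximal longitudinal wave vector, Eq. (6)

print(f"k_xy,c = {k_xy_c:.3e} rad/m, k_z,c = {k_z_c:.3e} rad/m")
```

The asymmetry between the lateral and longitudinal cut-offs quantifies the well-known anisotropy of coherent 3D imaging: the axial band pass is considerably smaller than the lateral one.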

As stated by Eq. (4), the recorded spectrum $G(\vec{k})$ is physically band limited by the CTF; thus it can be low-pass filtered with the maximal frequency $k_{xy,c}$ of Eq. (5) in order to suppress noise [8]. However, small amplitude transmission values within the band pass of the 3D CTF may still amplify noise. The noise amplification results in peak transmission values in the deconvolved spectrum, which add very high modulations in phase. Thus, phase information could be degraded through amplitude noise. To reduce noise degradation effectively we propose a threshold in the 3D CTF of Eq. (7), such that:

$$\tilde{c}(\vec{k}) = \begin{cases} c & \text{if } |c| > \tau \\ 1\cdot\exp\left[i\arg[c]\right] & \text{if } |c| \le \tau. \end{cases} \qquad (8)$$

For moduli of c smaller than τ, the CTF's amplitude is set to unity, so that its noise amplification is eliminated while its phase value still acts in the deconvolution. By controlling τ, truncated inverse complex filtering ($\tau \ll 1$) or pure phase filtering ($\tau = 1$) can be achieved. Therefore, the deconvolution result depends on the parameter τ. Compared to standard regularized 3D intensity deconvolution [14], the threshold acts as a regularization parameter in the amplitude domain while the phase domain is unaffected. The influence of this parameter is discussed in section 3.

Note that thresholding-based regularization is comparable to other schemes, like gradient-based total variation regularization [16]. However, the presented method does not assume sparsity, as known from compressive sensing approaches. Instead, it is based on the inversion of the coherent image formation of Eq. (1).
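The regularized inversion of Eqs. (7) and (8) can be sketched in a few lines of NumPy. This is a minimal illustration under our own discrete-FFT formulation; the function names are ours and this is not the authors' implementation:

```python
import numpy as np

def regularized_ctf(c, tau):
    """Regularize a 3D CTF per Eq. (8): keep c where |c| > tau,
    elsewhere set the amplitude to unity while keeping the phase."""
    c_tilde = c.copy()
    weak = np.abs(c) <= tau
    c_tilde[weak] = np.exp(1j * np.angle(c[weak]))
    return c_tilde

def complex_deconvolve(U, h, tau=0.1):
    """Invert the coherent image formation (Eq. 7) with the
    thresholded CTF of Eq. (8). U: complex image volume, h: APSF."""
    G = np.fft.fftn(U)   # complex image spectrum
    c = np.fft.fftn(h)   # coherent transfer function
    return np.fft.ifftn(G / regularized_ctf(c, tau))
```

Choosing τ trades noise suppression against amplitude fidelity: τ close to 0 approaches plain inverse filtering, while τ = 1 reduces to pure phase filtering, mirroring the discussion of Eq. (8).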


2.2. 3D field reconstruction of a 2D hologram

Typically, the 3D image of a specimen is acquired from a series of 2D images by refocusing the MO at different planes [14]. In the proposed technique, however, the complex fields are provided by digital holographic microscopy (DHM) in transmission configuration [17], as shown in Fig. 2(a).

Fig. 2. Experimental configuration. (a) Optical setup: LD, laser diode; BS, beam splitter; M, mirror; DL, delay line; SF, spatial filter; ND, neutral density filter; L, lens; TL, tube lens; CL, condenser lens; MO, microscope objective; RS, rotatable specimen. (b) RS with complex point source in MO design and non-design conditions. Insert: SEM image of nano-metric aperture (Ø ≈ 75 nm) in aluminum film at 150000× magnification. (c) RS with objects (see section 3) in experimental conditions with incident light $\vec{k}'_0$ along the optical axis.

Thus, the amplitude $A(\vec{r}_2)$ as well as the phase $\Phi(\vec{r}_2)$ of the hologram Ψ can be reconstructed by convolution [12]:

$$U(\vec{r}_2) = A(\vec{r}_2)\cdot\exp[i\Phi(\vec{r}_2)] = \frac{\exp(ikd_{z_2})}{id_{z_2}\lambda}\iint_{-\infty}^{\infty}\Psi(x'_2,y'_2)\exp\!\left[\frac{i\pi}{d_{z_2}\lambda}\left[(x'_2-x_2)^2+(y'_2-y_2)^2\right]\right]\mathrm{d}x'_2\,\mathrm{d}y'_2, \qquad (9)$$

where $\vec{r}\,'_2$ is a spatial coordinate in the hologram plane, and d is the hologram reconstruction distance as shown in Fig. 1(a). Using digital refocusing, a pseudo 3D field can be retrieved by varying the reconstruction distance $d_{z_2} = d + M^2 z_1$, scaled by the MO's longitudinal magnification of $M^2 = (f_2/f_1)^2$. Note that, opposed to MO refocusing, the physical importance of digital refocusing is related to the camera's distance from the system's image plane.
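Digital refocusing of the type used in Eq. (9) can be sketched with a transfer-function (FFT-based) evaluation of the Fresnel convolution. The sketch below is our own minimal formulation with our own naming and sampling conventions, not the paper's reconstruction code:

```python
import numpy as np

def fresnel_refocus(psi, wavelength, pixel, d):
    """Numerically refocus a reconstructed complex field psi by a
    Fresnel convolution of the kind in Eq. (9), evaluated with FFTs.
    pixel: sampling step in the hologram plane [m], d: distance [m]."""
    ny, nx = psi.shape
    fx = np.fft.fftfreq(nx, d=pixel)
    fy = np.fft.fftfreq(ny, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # Fresnel transfer function (frequency-domain form of the chirp kernel)
    H = np.exp(1j * k * d) * np.exp(-1j * np.pi * wavelength * d * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(psi) * H)
```

Stacking such reconstructions over a range of distances d yields the pseudo 3D field described above; propagating by d and then by −d returns the original field, which is a convenient sanity check.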

2.3. Experimental pseudo 3D APSF

The coherent imaging system can be experimentally characterized by a complex point source, shown in Fig. 2(b). It consists of an isolated nano-metric aperture (Ø ≈ 75 nm) in a thin opaque coating (thickness = 100 nm) on a conventional coverslip [11]. The aperture was fabricated in the Center of MicroNano-Technology (CMI) clean room facilities by focused ion beam (FIB) milling in the evaporated aluminum film. For a single point object $o(\vec{r}_1) = \delta(\vec{r}_1)$, the image field $U_\delta(\vec{r}_2)$ is the APSF $h(\vec{r}_2)$. This approximation holds for aperture diameters Ø ≪ $d_{min}$ ($d_{min}$: limit of resolution), and its imaged amplitude and phase have been shown to be characteristic [10,18].


The coverslip is mounted on a custom diffraction tomography microscope [19] based on sample rotation and transmission DHM, as shown in Fig. 2(a). The setup operates at λ = 680 nm and is equipped with a dry MO for long working distance of nominal NA = 0.7 and magnification M = 100×. The sample rotation by θ, as depicted in Fig. 2, introduces non-design MO imaging conditions [4], discussed in section 3.1. The optical path difference between corresponding rays in design and non-design systems can be caused by the use of a coverslip with a thickness or refractive index differing from that of the design system, the use of a non-design immersion oil $n_i$, or the defocus of the object in a mismatched mounting medium $n_m$ [3]. Consequently, a system's defocus must be avoided through sample preparation, as discussed in section 3. Since a dry MO is used with matched coverslips, we expect the tilt to introduce the main experimental aberration, apart from MO intrinsic aberrations.

In order to demonstrate the importance of the proposed technique for diffraction tomography by sample rotation, experimental holograms are recorded for tilt positions as well. The digitally reconstructed pseudo 3D APSF are shown for two positions in Figs. 3(a)-3(b): without tilt and for θ = 15°.

Fig. 3. Measurement of complex point source in MO design and MO non-design imaging conditions (cf. Fig. 2). The experimental APSF sections in (a) yield design MO imaging conditions (θ = 0°), whereas sections in (b) yield non-design conditions (θ = 15°). The left side images show |h| central sections and the right images arg[h], respectively. Scalebars: 2 μm.

In MO design conditions, the complex field in Fig. 3(a) indicates a typical point spread function pattern [10]. In amplitude, the diffraction pattern is similar to the Airy diffraction pattern, derived from the Bessel function of the first kind $J_1$. The phase part oscillates at $J_1$'s roots with spacing λ/NA from −π to π [7]. Nonetheless, the axial sections in Fig. 3(a) are prone to a spherical-like aberration due to intrinsic aberrations [2]. The lateral sections show a good axial symmetry, hence the absence of strong coma-like or astigmatism-like aberrations.

In the case of θ = 15°, the field in Fig. 3(b) features asymmetries of the diffraction pattern in amplitude and phase. The aberration can be especially well observed in a lateral phase distortion. Likewise, the asymmetric aberration is also expressed in the axial direction of the APSF. The introduction of coma-like aberration [2] is due to the tilted coverslip system. Just like the above mentioned non-design conditions [3], the tilt results in optical path differences in the experimental system, which act as an additional aberration function [20].

The dependence of the APSF's amplitude $A_\delta$ and phase $\Phi_\delta$ on the sample rotation (cf. Fig. 2) is defined by

$$h_{\vec{k}_0}(\vec{r}_2) = A^{(s)}_{\delta,\vec{k}_0}(\vec{r}_2)\cdot\exp\left[i\Phi^{(s)}_{\delta,\vec{k}_0}(\vec{r}_2)\right], \qquad (10)$$

with the illumination wavevector $\vec{k}'_0 = (k_{0,x},k_{0,y},k_{0,z})$ in the laboratory reference frame, meaning relative to the optical axis. In the demonstrated case of sample rotation, $\vec{k}'_0 = (0,0,1)$ relative to the optical axis does not change. However, the illumination relative to the sample does change, and the incident field vector can be expressed as $\vec{k}_0 = (0,\sin\theta,\cos\theta)$ in the sample frame of reference, according to Fig. 2(b).

The complex point source technique allows registering the scattered h free of background illumination. As a result, the SNR is advantageous and the required h can be directly used for Eq. (2). The amplitude and phase of the recorded field, $A_{\delta,\vec{k}_0}(\vec{r}_2)$ and $\Phi_{\delta,\vec{k}_0}(\vec{r}_2)$, then correspond to the scattered components $A^{(s)}_{\delta,\vec{k}_0}(\vec{r}_2)$ and $\Phi^{(s)}_{\delta,\vec{k}_0}(\vec{r}_2)$.

Eventually, it is the design of the complex point source (cf. Fig. 2) that guarantees a high accuracy of the APSF with respect to realistic imaging. The complex point source is directly located on a conventional coverslip, also used for object imaging. Thus, the APSF acquisition does not require any modifications of the imaging system, is stable, and, most importantly, is representative for object imaging under the same conditions. Even for defocused objects, the experimental APSF can be estimated by flipping the complex point source and immersing it with approximated defocus.

2.4. Experimental 3D CTF

If the incident illumination field on the scatterer is in direction $\vec{k}_0$ and the scattered field is measured in direction $\vec{k}$, the first-order Born approximation states that the 3D CTF is given by the cap of an Ewald sphere defined for

$$\vec{K} = \vec{k} - \vec{k}_0, \qquad (11)$$

as schematically shown in Fig. 1(b) for DHM [21]. The NA limits the sphere, so that only part of the diffracted light can be transmitted. The experimental DHM's 3D CTF can be directly calculated by applying Eq. (2) to Eq. (10) and is depicted in Fig. 4.

From this reconstruction, the NA can be directly measured through the subtended half-angle α according to the cut-off frequencies of Eqs. (5) and (6). The experimental CTF includes experimental conditions, i.e. aberrations, as well. By comparing Fig. 4(a) and Fig. 4(b), the impact of illumination becomes apparent. Due to the sample rotation, the MO can accept higher frequencies from one side of the aperture, while frequencies from the opposed side are cut off. As a result, the CTF is displaced along the Ewald's sphere [22]. Note that this displacement is a combination of translation and rotation if the rotational center is not coincident with the sample's geometrical center.

Thus, the 3D CTF can be written as a function of $\vec{K}$:

$$c(\vec{k}-\vec{k}_0) = \hat{A}^{(s)}_{\delta}(\vec{k}-\vec{k}_0)\cdot\exp\left[i\hat{\Phi}^{(s)}_{\delta}(\vec{k}-\vec{k}_0)\right], \qquad (12)$$

where a hat indicates the Fourier component in amplitude $\hat{A}$ and phase $\hat{\Phi}$, as summarized in Fig. 5(a). Similarly to the 3D CTF reconstruction, the three-dimensional complex spectrum $G(\vec{k}-\vec{k}_0)$ is calculated from the experimental configurations shown in Fig. 2(c).
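Numerically, obtaining the experimental 3D CTF from the refocused pseudo 3D APSF amounts to one 3D FFT, i.e. applying Eq. (2) to the measured h. A minimal sketch under our own conventions (centered arrays, unnormalized FFT; not the authors' code):

```python
import numpy as np

def experimental_ctf(apsf_stack):
    """Experimental 3D CTF as the 3D Fourier transform (Eq. 2) of the
    pseudo 3D APSF h measured from the complex point source.
    apsf_stack: complex array (z, y, x) of digitally refocused fields,
    with the focal point at the center of the volume."""
    return np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(apsf_stack)))
```

For an ideal (aberration-free, unlimited-NA) point response, the resulting transfer function is flat; for a measured APSF it carries the band-pass support and aberration phase discussed above.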

2.5. Scattered field retrieval

In the case of transmission microscopy, the APSF in Eq. (1) is not directly convolved with the complex object function o. According to diffraction theory [5], the total field o can be expressed as the sum of the incident field $o^{(i)}$ in direction of $\vec{k}_0$ and the scattered field $o^{(s)}$,

Fig. 4. Experimental 3D CTF in different imaging conditions. The experimental CTF in (a) is obtained for MO design imaging conditions (θ = 0°; subtended half-angles α₁ = 33°, α₂ = 35°), whereas the CTF depicted in (b) is obtained for non-design conditions (θ = 15°; α₁ = 25.5°, α₂ = 42.5°), according to Fig. 2. The upper row shows the top view on the CTF and the bottom row shows the side view through the CTF for $k_x = 0$, respectively.

$$o_{\vec{k}_0}(\vec{r}_1) = o^{(i)}_{\vec{k}_0}(\vec{r}_1) + o^{(s)}_{\vec{k}_0}(\vec{r}_1), \qquad (13)$$

where

$$o^{(s)}_{\vec{k}_0}(\vec{r}_1) = A^{(s)}_{\vec{k}_0}(\vec{r}_1)\cdot\exp\left[i\Phi^{(s)}_{\vec{k}_0}(\vec{r}_1)\right], \qquad (14)$$

with the scattered field amplitude $A^{(s)}_{\vec{k}_0}(\vec{r}_1)$ and phase $\Phi^{(s)}_{\vec{k}_0}(\vec{r}_1)$. On substituting from Eq. (13) into Eq. (1) and using Eq. (7), we see that the complex deconvolution satisfies

$$\mathcal{F}\left\{o^{(s)}_{\vec{k}_0}(\vec{r}_1)\right\} = \frac{G(\vec{k}-\vec{k}_0) - \mathcal{F}\left\{\iiint_{-\infty}^{\infty} o^{(i)}_{\vec{k}_0}(\vec{r}_1)\,h_{\vec{k}_0}(\vec{r}_2-\vec{r}_1)\,\mathrm{d}x_1\,\mathrm{d}y_1\,\mathrm{d}z_1\right\}}{c(\vec{k}-\vec{k}_0)}. \qquad (15)$$

The subtracted convolution term in the numerator of Eq. (15) can be identified as $c(\vec{k}-\vec{k}_0)\,\mathcal{F}\{o^{(i)}_{\vec{k}_0}(\vec{r}_1)\}$, the reference field of an empty field of view [23]. Suppose that the field incident on the scatterer is a monochromatic plane wave of constant amplitude $A^{(i)}$, propagating in the direction specified by $\vec{k}_0$. The time-independent part of the incident field is then given by the expression

$$o^{(i)}_{\vec{k}_0}(\vec{r}_1) = A^{(i)}\cdot\exp\left[i\vec{k}_0\cdot\vec{r}_1\right], \qquad (16)$$

and Eqs. (15) and (16) yield


Fig. 5. Scheme of reciprocal space. In image (a), a 1D coherent transfer function as given by the complex point source [see Eq. (12)]. In image (b), the image's spectrum with background illumination $\vec{k}_0 \neq 0$.

$$\mathcal{F}\left\{o^{(s)}_{\vec{k}_0}(\vec{r}_1)\right\} = \frac{G(\vec{k}-\vec{k}_0) - \hat{A}^{(i)}\delta(\vec{k}-\vec{k}_0)}{c(\vec{k}-\vec{k}_0)}. \qquad (17)$$

On the other hand, according to Eqs. (13), (14), and (16), the image spectrum may be expressed as

$$G(\vec{k}-\vec{k}_0) = \hat{A}(\vec{k}-\vec{k}_0)\cdot\exp\left[i\hat{\Phi}(\vec{k}-\vec{k}_0)\right] = \begin{cases} \hat{A}^{(s)}(\vec{k}=\vec{k}_0)+\hat{A}^{(i)} & \text{if } \vec{k} = \vec{k}_0 \\ \hat{A}^{(s)}(\vec{k}-\vec{k}_0)\cdot\exp\left[i\hat{\Phi}(\vec{k}-\vec{k}_0)\right] & \text{if } \vec{k} \neq \vec{k}_0, \end{cases} \qquad (18)$$

as shown in Fig. 5(b).

Hence, substituting Eqs. (18) and (10) into Eq. (17) yields:

$$\mathcal{F}\left\{o^{(s)}_{\vec{k}_0}(\vec{r}_1)\right\} = \frac{\hat{A}^{(s)}(\vec{k}-\vec{k}_0)\cdot\exp\left[i\hat{\Phi}(\vec{k}-\vec{k}_0)\right]}{\hat{A}^{(s)}_{\delta}(\vec{k}-\vec{k}_0)\cdot\exp\left[i\hat{\Phi}_{\delta}(\vec{k}-\vec{k}_0)\right]}. \qquad (19)$$

Finally, the means of $\hat{A}^{(s)}$ can be normalized to $\hat{A}^{(s)}_{\delta}$, to equalize their spectral dynamic ranges.

Preferably, in order to avoid any degradation of the image spectrum by direct subtraction, $o^{(s)}_{\vec{k}_0}(\vec{r}_1)$ can alternatively be calculated by:

$$\mathcal{F}\left\{o^{(s)}_{\vec{k}_0}(\vec{r}_1)\right\} = \frac{G(\vec{k}-\vec{k}_0)}{\mathcal{F}\left\{h_{\vec{k}_0}(\vec{r}_1)+o^{(i)}_{\vec{k}_0}(\vec{r}_1)\right\}} = \frac{\hat{A}(\vec{k}-\vec{k}_0)\cdot\exp\left[i\hat{\Phi}(\vec{k}-\vec{k}_0)\right]}{\hat{A}^{(s)}_{\delta}(\vec{k}-\vec{k}_0)\cdot\exp\left[i\hat{\Phi}_{\delta}(\vec{k}-\vec{k}_0)\right]+\hat{A}^{(i)}\delta(\vec{k}-\vec{k}_0)}. \qquad (20)$$

In summary, the scattered field $o^{(s)}$ can be obtained for any illumination by Eq. (15) if an experimental reference field is provided. Alternatively, under the assumption of plane wave illumination, it can be calculated by Eqs. (19) or (20). The latter is used for processing in section 3.

We compare this result to the Fourier diffraction theorem [5] of the scattering potential $F(\vec{r}_1)$:

$$\mathcal{F}\left\{F_{\vec{k}_0}(\vec{r}_1)\right\} = \frac{i}{\pi}\,k_z\,\mathcal{F}\left\{U^{(s)}_{\vec{k}_0}(x_1,y_1,z_1 = z_{\pm})\right\}\exp\left[\mp ik_z z_{\pm}\right]. \qquad (21)$$

It states that the scattered field $U^{(s)}_{\vec{k}_0}$, recorded at plane $z_{\pm}$, is filtered by an ideal Ewald half sphere $k_z = \sqrt{(n_m k)^2 - k_x^2 - k_y^2}$ ($n_m$: refractive index of the mounting medium), and propagated by the last term, as known from the filtered back propagation algorithm of conventional diffraction tomography [6]. In our case, dividing by the 3D CTF, the spectrum is inversely filtered by an 'experimental' Ewald sphere. Moreover, the field propagation is intrinsically included through the z-dependent pre-factor in the reconstruction of Eqs. (9) and (10). Therefore, the product on the right side of Eq. (21) may be approximated as

$$\mathcal{F}\left\{F_{\vec{k}_0}(\vec{r}_1)\right\} = \frac{i}{\pi}\,\mathcal{F}\left\{o^{(s)}_{\vec{k}_0}(\vec{r}_1)\right\}. \qquad (22)$$

The main difference consists in the filter function. A multiplicative filter in Eq. (21) cannot correct for realistic imaging conditions, such as aberrations or MO diffraction. It merely acts as a low-pass filter in the shape of an Ewald sphere. By contrast, Eq. (22) implies the object function o^{(s)}_{\vec{k}_0}, which is compensated for realistic imaging conditions. In order to achieve that correction, fields must be divided [8]. A priori, the experimental CTF is more apt for this division since it intrinsically corrects for MO diffraction, apodization, aberrations and non-design imaging conditions, as discussed in section 3.1.

The function \hat{F}(\vec{K}) is the 3D Fourier transform of the scattering potential F(\vec{r}_1), derived from the inhomogeneous Helmholtz equation of the medium n(\vec{r}_1), where:

n(\vec{r}_1) = [n_m^2 - F_{LB}(\vec{r}_1)/k^2]^{1/2},  (23)

and n(\vec{r}_1) is the complex refractive index. The real part of Eq. (23) is associated with refraction while its imaginary part is related to absorption.
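The conversion of Eq. (23) from scattering potential to complex refractive index is a one-liner; the sketch below is illustrative (the function name and arguments are assumptions), using a complex square root so that absorption appears in the imaginary part.

```python
import numpy as np

def refractive_index(F_lb, n_m, wavelength):
    """Complex refractive index from the (low-pass filtered) scattering
    potential via Eq. (23): n(r) = sqrt(n_m^2 - F_LB(r)/k^2).
    Re(n) relates to refraction, Im(n) to absorption."""
    k = 2 * np.pi / wavelength
    return np.sqrt((n_m ** 2 - F_lb / k ** 2).astype(complex))
```

A vanishing scattering potential recovers the homogeneous mounting-medium index n_m everywhere.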

If one were to measure the scattered field in the far zone for all possible directions of incidence and all possible directions of scattering, one could determine all the Fourier components \hat{F}(\vec{K}) of the scattering potential within the full limiting Ewald sphere of |\vec{K}| \le 2k = 4\pi/\lambda. One could synthesize all Fourier components to obtain the approximation

F_{LB}(\vec{r}_1) = \frac{1}{(2\pi)^3} \iiint_{|\vec{K}| \le 2k} \hat{F}(\vec{K})\exp(i\vec{K}\cdot\vec{r}_1)\, dK_x\, dK_y\, dK_z,  (24)

called the low-pass filtered approximation of the scattering potential [5], which gives rise to diffraction tomography. Opposed to this full approach, the approximation of the partial low-pass filtered scattering potential F in only one direction of \vec{k}_0 gives rise to optical sectioning, as discussed in section 3.3.
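The synthesis of Eq. (24) amounts, on a discrete grid, to masking the spectrum outside the limiting Ewald sphere and transforming back. The following numpy sketch assumes a cubic fftn grid and an isotropic voxel pitch; names and parameters are illustrative.

```python
import numpy as np

def lowpass_scattering_potential(F_hat, pitch, wavelength):
    """Discrete sketch of Eq. (24): keep only Fourier components of the
    scattering potential inside the limiting Ewald sphere |K| <= 2k
    (with 2k = 4*pi/lambda) and inverse-transform.
    F_hat: 3D spectrum on an fftn grid; pitch: voxel size in meters."""
    k = 2 * np.pi / wavelength
    axes = [2 * np.pi * np.fft.fftfreq(n, pitch) for n in F_hat.shape]
    KX, KY, KZ = np.meshgrid(*axes, indexing="ij")
    inside = KX ** 2 + KY ** 2 + KZ ** 2 <= (2 * k) ** 2
    return np.fft.ifftn(F_hat * inside)
```

When the sampled frequencies all lie inside the sphere, the mask is transparent and the original potential is recovered.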

3. Applications

In this section, general imaging aspects of the proposed method and their impact on the phase signal are evaluated. Moreover, the extraction of a scattered object field is practically demonstrated to result in optical sectioning.

3.1. Coherent imaging inversion

First, the general impact of complex deconvolution on the coherent imaging inversion under non-design conditions is discussed. For this purpose, experimental images of non-absorbing mono-dispersed polystyrene microspheres (n_sph = 1.59, Ø ≈ 5.8 μm) in water (n_m,H2O = 1.33) are recorded at a tilt angle of θ = 15°, shown in Fig. 6(a) for the raw data and in Fig. 6(b) after deconvolution.

If complex deconvolution is successful, then a number of improvements in the complex field should be noted, accordingly indicated in Fig. 6 by regions of interest (ROI):

Background extinction: The transparent sample images are recorded with the incident light o^{(i)} in the direction of \vec{k}_0. According to Eq. (20), a DC value is added to the APSF to compensate for o^{(i)}, as well seen by the reduced background haze in the amplitude (cf. Fig. 6 ROI-1).


Fig. 6. Complex fields of polystyrene microspheres in water at a tilt angle of θ = 15°, according to Fig. 2. The main images show the raw amplitude (a) and the deconvolved amplitude (b), in [a.u.], with background ROI-1 and object ROI-2 with circle Ø ≈ 5.8 μm. The inserts in ROI-3 show the phase parts, in [rad], respectively. Colorbar, Scalebar: 4 μm.

Similarly, the removed background results in the full 2π dynamic range of the phase oscillation (cf. Fig. 6 ROI-3). Finally, the object ROI-2 in Fig. 6 displays improved contrast since the objects' edges are sharpened by the complex deconvolution.

Diffraction pattern suppression: A second motivation consists in correcting the diffraction pattern of the MO's APSF. This correction is particularly required for high-NA imaging systems since the APSF diffraction pattern may result in incorrect tomographic reconstruction near the resolution limit. The diffraction pattern can be observed to be well suppressed by comparing ROI-1 in Fig. 6. As a result, the diffraction pattern of the refractive index mismatched sphere [24, 25] becomes apparent in ROI-2 of Fig. 6.

Complex aberration correction: Aberrations are intrinsic to MOs, especially high-NA objectives [2]. Additionally, experimental MO non-design conditions may introduce symmetric aberrations [3]. Those conditions include a non-design refractive index n_i, a mismatch of n_i and n_m, a defocus of the object in n_m, and a non-design coverslip thickness or refractive index. Also, axially asymmetric aberrations are introduced by the sample rotation [4, 20]. This aberration can be recognized as the asymmetric deformation of the diffraction pattern in the raw images (cf. Fig. 6 ROI-1 and ROI-3). As a consequence, the raw object in Fig. 6 ROI-2 is deformed, too. However, after deconvolution the asymmetric diffraction pattern is removed in ROI-1 of Fig. 6. In the same manner, the phase oscillation becomes equally spaced after deconvolution, as shown in Fig. 6 ROI-3. Eventually, the object can be reconstructed in a deformation-free manner in Fig. 6 ROI-2.

Note that even with an accurate APSF and an effective deconvolution algorithm, deconvolved

images can suffer from a variety of defects and artifacts described in detail in reference [15].

3.2. The impact of phase deconvolution

Most artifacts arise from noise amplification, as mentioned in section 2.1. The suggested 3D deconvolution of complex fields has the capacity to tune between complex and phase deconvolution according to Eq. (8). Thus, noise amplification can be excluded for τ = 1 while the phase part still leads to image correction according to the previous section. As seen by Eq. (19), the phase deconvolution effectively acts as a subtraction of the diffraction pattern in phase. Strictly speaking, the recorded phase is not the phase difference between object and reference beam, but also includes the MO's diffraction due to the frequency cutoff. The coherent system is seen to exhibit rather pronounced coherent imaging edges, known as 'ringing' [26]. For biological samples [27], the diffraction influence in phase may be of great importance for the interpretation of the phase signal.
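The tuning between complex and phase deconvolution can be sketched as a truncated inverse filter: the CTF phase is always removed, while its amplitude is only inverted where it exceeds the threshold τ. This is a plausible reading of the regularization described around Eq. (8), not the authors' exact implementation; names and the normalization are assumptions.

```python
import numpy as np

def tunable_deconvolve(U, h, tau):
    """Truncated inverse filter tuning between complex and phase
    deconvolution. tau = 1 leaves all amplitudes untouched (pure phase
    deconvolution); smaller tau inverts the CTF amplitude wherever
    |CTF| >= tau * max|CTF|, at the price of noise amplification."""
    U_spec = np.fft.fftn(U)
    ctf = np.fft.fftn(h)
    amp = np.abs(ctf)
    amp_max = amp.max()
    # invert amplitude only where the CTF is trustworthy
    safe_amp = np.where(amp >= tau * amp_max, amp / amp_max, 1.0)
    inverse = np.exp(-1j * np.angle(ctf)) / safe_amp
    return np.fft.ifftn(U_spec * inverse)
```

For an ideal point-like APSF the CTF is flat, so any τ leaves the field unchanged.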

Consequently, we investigate the complex deconvolution's influence on the phase signal of biological samples. The samples are human red blood cells (RBC) that have been fixed with ethanol 95% and are suspended in a physiological solution (n_m,sol = 1.334 at λ = 682 nm). This preparation method fixes the RBC directly on the coverslip surface to avoid defocus aberrations, and a space invariant APSF may be assumed within the field of view [28]. The experimental APSF used for processing was acquired under identical object imaging conditions, i.e. the same tilt positions with identical coverslips, as shown in Fig. 2. The experimental APSF can therefore be accurately accessed. A comparison between the raw phase images and the phase deconvolved images is shown for two RBCs in Figs. 7(a) and 7(c).

Fig. 7. Human RBCs in phase. Images (a) and (c) show the phase images of two RBCs at θ = 0. Unprocessed images are labeled 'RAW' and the label 'PROCESSED' indicates the deconvolved phase for τ = 1. The profiles in (b) and (d) compare the corresponding height differences of the images above, at central sections (indicated by arrows). The error bars indicate the level of phase noise (≈ 0.1 rad). Images (d)–(f) show the top view on RBC (c), processed with different τ (τ = 10⁻¹, 10⁻², 10⁻³, expressed in units of Â^{(s)}_δ). Colorbars, Scalebars: 4 μm.

Fig. 8. Inclined human RBCs in phase at various tilt angles θ (θ = 2°, 4°, 6°, 8°, 10°, 14°). Images (a)–(f) are unprocessed and images (g)–(l) are pure phase deconvolved (τ = 1). According to Fig. 2, the axis of rotation 'x' is indicated. Colorbar, Scalebars: 4 μm.

The influence of the phase deconvolution can be seen directly by comparing these topo-

graphic images. Based on their shapes, RBCs are classified in different stages [27]. The raw

image in Fig. 7(c) resembles a trophozoite stage while its phase deconvolved image reveals a


ring structure. Similarly, the processed RBC in Fig. 7(a) reveals a trophozoite stage. A more detailed comparison is given by the central height profiles in Figs. 7(b) and 7(d), calculated by assuming a constant refractive index of n_RBC = 1.39. It shows that the deconvolved phase profile basically follows the raw trend. However, similar to coherent imaging ringing, edges are less prone to oscillations after phase deconvolution, in particular at the RBC's borders. Thus the impact of the complex deconvolution is to de-blur the phase signal.
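The height profiles above follow from the standard phase-to-height conversion under the constant-index assumption; a one-line sketch with the constants of this section (n_RBC = 1.39, n_m,sol = 1.334, λ = 682 nm) as illustrative defaults:

```python
import numpy as np

def phase_to_height(phi, n_obj=1.39, n_m=1.334, wavelength=682e-9):
    """Height profile from a measured phase image, assuming a constant
    intracellular refractive index: h = phi * lambda / (2*pi*(n_obj - n_m))."""
    return phi * wavelength / (2 * np.pi * (n_obj - n_m))
```

A full 2π phase turn corresponds here to λ/(n_obj − n_m) ≈ 12 μm of cell thickness.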

However, the recovered phase signal is highly sensitive to noise, as demonstrated in Figs. 7(d)–7(f). A decreasing value of τ implies a decreased SNR through amplitude noise amplification. As a consequence, random phasors can be introduced that degrade the phase signal. Figures 7(e) and 7(f) show that signal degradation most prominently affects the RBC's central and border regions. For a pure retrieval of phase that is unaffected by noise amplification, τ should not be smaller than −1 dB.

In Fig. 8, measurements under non-design imaging conditions are presented. The RBC's raw phase measurements become strongly deformed with increasing coverslip tilt [cf. Figs. 8(a)–8(f)]. This observation is in accordance with section 2.3, where strong coma-like aberrations are observed. Consequently, APSFs acquired under the same non-design conditions, i.e. the same tilt, can be used to effectively correct for coma aberration within the depth of field. Accordingly processed RBC measurements successfully recover the ring stage, even at steeper tilt angles, as demonstrated in Figs. 8(g)–8(l). Note that the phase signal increases for steeper θ since the projection surface along the y-axis decreases, as observed in Figs. 8(g)–8(l).

3.3. Scattered object field retrieval and optical sectioning

From the intensity deconvolution point of view, pseudo 3D microscopy can be achieved by reduction of out-of-focus haze [14], which means that the spread of objects in the z-direction is reduced. Recently, this potential has also been demonstrated for digitally refocused 3D fields [13] and compressive holography [16]. Optical sectioning effects are therefore intrinsic to 3D complex deconvolution if τ ≪ 1 and amplitude information is not discarded. From the inverse filtering point of view, the z-confinement arises from the filtering by the 3D CTF as shown in Fig. 4. To demonstrate that scattered object field retrieval features this effect, complex deconvolution was performed on the RBC sample of the previous section 3.2 with τ determined by the CTF's noise level. For optimal amplitude contrast, the value of τ was calculated by a histogram-based method, Otsu's rule, well known in intensity deconvolution [29]. The resulting τ ≈ −3 dB lies beneath the pure phase deconvolution criterion of −1 dB, but it allows effective diffraction pattern suppression. Therefore, high amplitude contrast compromises the phase signal. The results are shown in Fig. 9.
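The histogram-based choice of τ can be sketched with a plain re-implementation of Otsu's rule on the normalized CTF amplitude; this is an illustrative stand-in for the method cited as reference [29], and the function name and binning are assumptions.

```python
import numpy as np

def otsu_tau(ctf_amplitude, bins=256):
    """Pick a truncation level tau for the inverse filter by Otsu's rule:
    maximize the between-class variance of the normalized CTF amplitude
    histogram, separating 'signal' bins from 'noise' bins.
    Returns tau relative to the amplitude maximum."""
    a = ctf_amplitude.ravel() / ctf_amplitude.max()
    hist, edges = np.histogram(a, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    m = np.cumsum(p * centers)        # cumulative first moment
    mg = m[-1]                        # global mean
    w1 = 1.0 - w0
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mg * w0 - m) ** 2 / (w0 * w1)
    between = np.nan_to_num(between)  # empty classes contribute nothing
    return centers[np.argmax(between)]
```

On a cleanly bimodal amplitude distribution, the returned threshold falls between the two modes.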

The raw 3D field |U| in Fig. 9(a) shows the object spread along the axial direction according

to Eq. (9). As expected, the reconstruction features no axial confinement and the RBCs cannot

be recognized.

On the other hand, the scattered object field |o^{(s)}| after truncated inverse filtering according to Eq. (20) is depicted in Fig. 9(b). It shows that the background field and out-of-focus haze are successfully removed. The RBCs' edges can be identified as strong scattering objects and the scattered field can be recognized to match the anticipated RBC values in size and position. Although the image quality is affected by axial elongation artifacts [30], its axial dimensions match well. However, the refractive index mismatch between the cell membrane (n_lipid > 1.4) and the mounting medium (n_m,sol = 1.334 at λ = 682 nm) results in strong scattering, well visible in the xz sections.

Finally, the fields related to refraction, |n|, can be reconstructed in Fig. 9(c) according to

Eq. (23). In particular, the refraction due to the strong scattered field around the RBCs allows

a good three-dimensional localization of the RBCs’ edges. Moreover, the higher refraction


Fig. 9. 3D rendered images of two human RBCs at θ = 0 (axes in μm, amplitudes in [a.u.]). Image (a) shows |U| in 3D space in the middle row; bottom and top rows show the sections through the central RBC positions (indicated by arrows). Accordingly, the field |o^{(s)}| is represented in (b), and |n| in (c) (uncalibrated levels), respectively. The RBCs' positions are compared to an oval area of 6 μm × 2.5 μm, based on measurements from section 3.2. Colorbars.

index due to its hemoglobin content is well visible in the xz sections. Note that these data are reconstructed from only one hologram at a single incident angle \vec{k}_0, and therefore missing angles in Eq. (24) affect the reconstruction, as seen by the lateral artifacts. However, if the 3D inverse filtering technique is combined with a multi-angle acquisition, it holds the potential for quantitative 3D refractive index reconstruction.

4. Concluding remarks

In this paper, we have described the theory, experimental aspects, and applications of a novel method of 3D imaging using realistic 3D CTF inverse filtering of complex fields.

Our theory connects three-dimensional coherent image formation and diffraction theory and results in a model for object scattering reconstruction by inverse filtering. This approach is experimentally complemented by the ability to characterize the DHM setup with a background-free APSF, thanks to the use of a complex point source. The physical importance of the realistic 3D CTF is demonstrated with experimental data. The technique features effective correction of background illumination, diffraction pattern, aberrations and non-ideal experimental imaging conditions. Moreover, the regularization of the three-dimensional deconvolution of complex fields is shown to yield reconstruction in the complex domain. Depending on the threshold, phase de-blurring or optical sectioning is demonstrated with RBC measurements. Most essentially, the capability of scattered field extraction is experimentally presented.

In conclusion, the demonstrated technique bears the potential to reconstruct object scattering functions under realistic high-NA imaging conditions, which play a key role in high-resolution diffraction tomography.

Acknowledgments

This work was funded by the Swiss National Science Foundation (SNSF), grant 205 320 – 130 543. The experimental data used in section 3 and intended for tomographic application were obtained in the framework of the Project Real 3D, funded by the European Community's Seventh Framework Program FP7/2007-2013 under grant agreement no. 216105. The authors also acknowledge the Center of MicroNano-Technology (CMI) for the cooperation on its research facilities.
