Efficient Bayesian-based multiview deconvolution.
ABSTRACT Light-sheet fluorescence microscopy is able to image large specimens with high resolution by capturing the samples from multiple angles. Multiview deconvolution can substantially improve the resolution and contrast of the images, but its application has been limited owing to the large size of the data sets. Here we present a Bayesian-based derivation of multiview deconvolution that drastically improves the convergence time, and we provide a fast implementation using graphics hardware.

© 2014 Nature America, Inc. All rights reserved.
brief communications
nature methods  ADVANCE ONLINE PUBLICATION  ?
RL deconvolution algorithm and subsequently extended it to multiple-view geometry, yielding

f_v^{\mathrm{RL}}(\xi) = \int \frac{\phi_v(x_v)}{\int \psi^r(\tilde{\xi})\, P(x_v \mid \tilde{\xi})\, d\tilde{\xi}}\, P(x_v \mid \xi)\, dx_v \quad (1)

\psi^{r+1}(\xi) = \psi^r(\xi) \prod_{v \in V} f_v^{\mathrm{RL}}(\xi) \quad (2)
where ψ^r(ξ) denotes the deconvolved image at iteration r and φ_v(x_v) denotes the input views, both as functions of their respective pixel locations ξ and x_v, whereas P(x_v|ξ) denotes the individual PSFs (Supplementary Note 1). Equation (1) denotes a classical RL update step for one view; equation (2) illustrates the combination of all views into one update of the deconvolved image (Supplementary Video 1). In contrast to the maximum-likelihood (ML) EM5,13 that combines RL updates by addition, equation (2) suggests a multiplicative combination. We proved that equation (2), just as the ML-EM5,13 algorithm, converges to the ML solution (Supplementary Note 2). The ML solution is not necessarily the correct solution if disturbances such as noise or misalignments are present in the input images (Fig. 2). Importantly, previous extensions to multiple views5–10 assume individual views to be independent observations (Supplementary Fig. 2). Assuming independence between two views implies that by observing one view, nothing can be learned about the other view. We showed that this independence assumption is not required to derive equation (2) (Supplementary Note 3). Our solution represents, to our knowledge, the first complete derivation of RL multiview deconvolution based on probability theory and Bayes’ theorem.
As we do not need to consider views to be independent, we next asked whether the conditional probabilities describing the relationship between two views can be modeled and used to improve convergence behavior (Supplementary Figs. 1 and 3 and Supplementary Notes 3 and 4). If we assume that a single photon is observed in the first view, the PSF of this view and Bayes’ theorem can be used to assign a probability to every location in the deconvolved image having emitted this photon (Fig. 1b). On the basis of this probability distribution, the PSF of the second view directly yields the probability distribution describing where to expect a corresponding observation for the same fluorophore in the second view (Fig. 1b). Thus, we argue that it is possible to compute an approximate image (‘virtual’ view) of one view from another view provided that the PSFs of both views are known (Fig. 1c).
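This construction can be sketched as follows (a 1D toy of my own, not the paper's implementation): back-projecting an observation of view 1 through PSF 1 and forward-projecting through PSF 2 is equivalent to convolving view 1 with a compound ‘virtual’ PSF.

```python
import numpy as np

def virtual_view(phi1, psf1, psf2):
    """Approximate what view 2 would observe, given only view 1 and both
    PSFs: the 'virtual' PSF is the mirrored PSF 1 convolved with PSF 2."""
    vpsf = np.convolve(psf1[::-1], psf2, mode="full")
    return np.convolve(phi1, vpsf, mode="same")

# a photon detected at pixel 20 of view 1 spreads into the probability
# distribution for the corresponding observation in view 2
phi1 = np.zeros(41)
phi1[20] = 1.0
psf1 = np.array([0.25, 0.5, 0.25])
psf2 = np.array([0.2, 0.6, 0.2])
phi2_virtual = virtual_view(phi1, psf1, psf2)
```

For symmetric PSFs the virtual view stays centered on the detected photon; total probability mass is preserved because both PSFs sum to one.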
We used these virtual views to perform intermediate
update steps at no additional computational cost, decreasing
Efficient Bayesian-based multiview deconvolution

Stephan Preibisch1–4, Fernando Amat2, Evangelia Stamataki1, Mihail Sarov1, Robert H Singer2–4, Eugene Myers1,2 & Pavel Tomancak1
Modern light-sheet microscopes1–3 acquire images of large, developing specimens with high temporal and spatial resolution, typically by imaging them from multiple directions (Fig. 1a). Deconvolution uses knowledge about the optical system to increase spatial resolution and contrast after acquisition. An advantage unique to light-sheet microscopy, particularly the selective-plane illumination microscopy (SPIM) variant, is the ability to observe the same location in the specimen from multiple angles, which renders the ill-posed problem of deconvolution more tractable4–10.
Richardson-Lucy (RL) deconvolution11,12 (Supplementary Note 1) is a Bayesian-based derivation resulting in an iterative expectation-maximization (EM) algorithm5,13 that is often chosen for its simplicity and performance. Multiview deconvolution has previously been derived using the EM framework5,9,10; however, the convergence time of the algorithm remains orders of magnitude longer than the time required to record the data. We addressed this problem by deriving an optimized formulation of Bayesian-based deconvolution for multiple-view geometry that explicitly incorporates conditional probabilities between the views (Fig. 1b,c and Supplementary Fig. 1) and combining it with ordered-subsets EM (OSEM)6 (Fig. 1d and Supplementary Fig. 2), achieving substantially faster convergence (Fig. 1d–f).
Bayesian-based deconvolution models images and point spread functions (PSFs) as probability distributions. The goal is to estimate the most probable underlying distribution (deconvolved image) that best explains all observed distributions (views) given their conditional probabilities (PSFs). We first rederived the original
1Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany. 2Janelia Farm Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia,
USA. 3Department of Anatomy and Structural Biology, Albert Einstein College of Medicine, Bronx, New York, USA. 4Gruss Lipper Biophotonics Center, Albert Einstein
College of Medicine, Bronx, New York, USA. Correspondence should be addressed to S.P. (preibischs@janelia.hhmi.org) or P.T. (tomancak@mpicbg.de).
Received 8 July 2013; accepted 6 March 2014; published online 20 April 2014; doi:10.1038/nmeth.2929
the computational effort approximately twofold (Fig. 1d and Supplementary Note 4). The multiplicative combination (equation (2)) directly suggests a sequential approach, wherein each RL update (equation (1)) is directly applied to ψ^r(ξ) (Supplementary Fig. 2 and Supplementary Note 5). This sequential scheme is equivalent to the OSEM6 algorithm and results in a 13-fold decrease in convergence time. This gain increases linearly with the number of views6 (Fig. 1d and Supplementary Fig. 4). To further reduce convergence time, we introduced ad hoc simplifications (optimizations I and II) for the estimation of conditional probabilities that achieve up to 40-fold improvement compared to deconvolution methods that assume view independence (Fig. 1d–f, Supplementary Figs. 4 and 5 and Supplementary Notes 6 and 7).
The new algorithm also performs well in the presence of noise and imperfect PSFs (Supplementary Figs. 6–8). If the input views show a very low signal-to-noise ratio (SNR), atypical for SPIM, the speedup is preserved, but the quality of the deconvolved image is reduced. Our Bayesian-based derivation does not assume a specific noise model, but in practice it is robust with respect to Poisson noise, which is the dominating source of noise in light-sheet microscopy acquisitions.
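The sequential (OSEM-style) scheme can be sketched as follows, again as a 1D NumPy toy rather than the paper's implementation: each view's RL correction is applied immediately, so later views already operate on an improved estimate.

```python
import numpy as np

def osem_pass(psi, views, psfs, eps=1e-12):
    """One sequential pass: apply each view's RL correction to the
    estimate right away instead of accumulating all corrections first."""
    for phi, psf in zip(views, psfs):
        predicted = np.convolve(psi, psf, mode="same")
        ratio = phi / np.maximum(predicted, eps)
        psi = psi * np.convolve(ratio, psf[::-1], mode="same")
    return psi

# toy data: a point source blurred by two different PSFs
truth = np.zeros(64)
truth[40] = 10.0
psfs = [np.array([0.25, 0.5, 0.25]), np.array([0.1, 0.2, 0.4, 0.2, 0.1])]
views = [np.convolve(truth, p, mode="same") for p in psfs]
psi = np.full_like(truth, 1.0)
for _ in range(30):
    psi = osem_pass(psi, views, psfs)
```

Compared with accumulating all view corrections before updating, the sequential ordering typically needs fewer passes to reach the same estimate.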
We compared the performance of our method with that of previously published multiview deconvolution algorithms5–10 in terms of convergence behavior and run time on the central processing unit (CPU) (Figs. 1e,f and 2d and Supplementary Figs. 4b and 9a,b). For typical SPIM multiview scenarios consisting of around seven views with a high SNR, our method requires sevenfold fewer iterations and is at least threefold faster than OSEM6, scaled gradient projection (SGP)8 and maximum a posteriori with Gaussian noise (MAPG)7. At the same time our optimization is able to improve the image quality of real and simulated data sets compared to MAPG7 (Fig. 2e,f and Supplementary Fig. 9c–h). A further threefold speedup and reduced memory consumption are achieved by using our CUDA (Compute Unified Device Architecture) implementation (Supplementary Fig. 10g). Moreover, our approach is capable of dealing with partially overlapping acquisitions typical in multiview imaging (Supplementary Fig. 10 and Online Methods).
In order to evaluate our algorithm on realistic three-dimensional (3D) multiview image data, we simulated a ground-truth data set resembling a biological specimen (Fig. 2a). We next simulated image acquisition in a SPIM microscope from multiple angles by applying signal attenuation across the field of view, convolving the data with the PSF of the microscope, simulating the multiview optical sectioning and using a Poisson process to generate the final pixel intensities (Fig. 2b and Online Methods). We deconvolved the generated multiview data (Fig. 2c) using our algorithm with and without regularization (regularization adds smoothness constraints to the deconvolution process to achieve a more plausible solution for this ill-posed problem) and compared the results to the content-based fusion14 and the MAPG7 deconvolution (Fig. 2d–f). Our algorithm reached optimal reconstruction quality faster (Fig. 2d) and introduced fewer artifacts than MAPG7 (Fig. 2e,f and Supplementary Videos 2 and 3). Tikhonov regularization15 was required to converge to a reasonable result under realistic imaging conditions (Fig. 2d–f).
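One common way to fold Tikhonov regularization into an RL iteration (an illustrative sketch; the paper's exact regularized update is given in its supplement, and the correction form used here is the classic intensity-penalty variant, not necessarily the authors' own) is to apply the plain RL update and then damp it so that the penalty vanishes as λ → 0:

```python
import numpy as np

def rl_update_tikhonov(psi, phi, psf, lam=0.004, eps=1e-12):
    """One RL step followed by a per-iteration Tikhonov correction.
    lam = 0 reduces to plain Richardson-Lucy; lam > 0 damps large
    intensities and stabilizes the ill-posed inversion."""
    predicted = np.convolve(psi, psf, mode="same")
    ratio = phi / np.maximum(predicted, eps)
    u = psi * np.convolve(ratio, psf[::-1], mode="same")  # plain RL update
    if lam == 0.0:
        return u
    return (np.sqrt(1.0 + 2.0 * lam * u) - 1.0) / lam

# example: one regularized step on a blurred point source
truth = np.zeros(32)
truth[16] = 4.0
psf = np.array([0.25, 0.5, 0.25])
phi = np.convolve(truth, psf, mode="same")
psi_next = rl_update_tikhonov(np.ones(32), phi, psf, lam=0.004)
```

Because sqrt(1 + 2λu) ≤ 1 + λu, the regularized estimate never exceeds the unregularized one, which is what suppresses noise amplification in late iterations.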
We applied our deconvolution approach to multiview SPIM
acquisitions of Drosophila melanogaster and Caenorhabditis
[Figure 1 panels: (a) light-sheet microscope layout (20×/1.0 water-dipping objective, sample in 1% agarose with x,y,z movement and rotation around y, water-filled chamber); (b) schematic of PSF-based probability assignment between an observed view and a ‘virtual’ view of the underlying image; (c) observed views 1 and 2, PSFs 1 and 2, ‘virtual’ view and ‘virtual’ PSF (scale bar, 50 µm); (d–f) convergence plots versus number of views comparing Bayesian-based, efficient Bayesian-based (OSEM), optimization I/II (OSEM), MAPG (Java), OSEM (IDL) and SGP (IDL), with speedups between 2× and 40×.]
Figure 1 | Principles and performance. (a) Basic layout of a light-sheet microscope capable of multiview acquisitions. (b) Illustration of ‘virtual’ views. A photon detected at a certain location in a view was emitted by a fluorophore in the sample; the PSF assigns a probability to every location in the underlying image having emitted that photon. Consecutively, the PSF of any other view assigns to each of its own locations the probability to detect a photon corresponding to the same fluorophore. (c) Example of an entire virtual view computed from observed view 1 and the knowledge of PSF 1 and PSF 2. (d) Convergence time of the different Bayesian-based methods. We used a known ground-truth image (Supplementary Fig. 5) and let all variations converge until they reached precisely the same quality. The increase in computation time for an increasing number of views of the combined methods (black) is due to the fact that with an increasing number of views, more computational effort is required to perform one update of the deconvolved image (Supplementary Fig. 4). (e) Convergence times for the same ground-truth image of our Bayesian-based methods compared to those of other optimized multiview deconvolution algorithms5–8. The difference in computation time between Java implementations and IDL implementations, OSEM6 and SGP8, results in part from nonoptimized IDL code. (f) Corresponding number of iterations for our algorithm and other optimized multiview deconvolution algorithms.
elegans embryos (Fig. 3a–e). We achieved a substantial increase in contrast as well as resolution with respect to the content-based fusion14 (Fig. 3b and Supplementary Fig. 11); only a few iterations were required, and computation times were typically in the range of a few minutes per multiview acquisition (Supplementary Table 1). We applied the deconvolution to a four-view acquisition of a fixed C. elegans in larval stage 1 (L1) expressing GFP-tagged lamin (LMN-1–GFP) labeling the nuclear lamina and stained for DNA with Hoechst (Fig. 3f,g). Multiview deconvolution improved contrast and resolution compared to the input data and enabled unambiguous segmentation of nuclei in problematic areas of the nervous system16 (Supplementary Videos 4–7). The algorithm dramatically improved multiview data acquired with OpenSPIM17 (Supplementary Fig. 12), and its efficiency makes it applicable to spatially large multiview data sets (Supplementary Fig. 13) and to processing of long-term time-lapses from the Zeiss Lightsheet Z.1 (Supplementary Videos 8–11 and Supplementary Table 1).
Multiview deconvolution increases contrast in SPIM data after acquisition, complementary to hardware-based contrast enhancement achieved by digital scanned laser light-sheet microscopy with structured illumination (DSLM-SI)18 (Supplementary Fig. 14). Moreover, multiview deconvolution produced superior results when comparing an acquisition of the same sample with SPIM and a two-photon microscope (Supplementary Fig. 15). Finally, the benefits of the multiview deconvolution approach are not limited to SPIM, as illustrated by the deconvolved multiview spinning-disc confocal microscope acquisition of a C. elegans in L1 stage14 (Supplementary Fig. 16).
A major obstacle for widespread application of deconvolution approaches to multiview light-sheet microscopy data is the lack of usable and scalable multiview deconvolution software. Therefore, we implemented our fast-converging algorithm as a Fiji19 plugin taking advantage of ImgLib2 (ref. 20) and GPU processing (http://fiji.sc/MultiView_Deconvolution). The only free parameter of the method that must be chosen by the user is the number of iterations for the deconvolution process. We facilitate this choice by providing a debug mode allowing the user to inspect all intermediate iterations and identify the optimal trade-off between quality and computation time. Our Fiji19 implementation synergizes with other related plugins and provides an integrated solution for the processing of multiview light-sheet microscopy data of arbitrary size.
Methods
Methods and any associated references are available in the online version of the paper.
Note: Any Supplementary Information and Source Data files are available in the online version of the paper.
Acknowledgments
We thank T. Pietzsch (Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG)) for helpful discussions, proofreading and access to his unpublished software; N. Clack, F. Carrillo Oesterreich and H. Bowne-Anderson for discussions; N. Maghelli for two-photon imaging; P. Verveer (MPI Dortmund) for source
[Figure 2 panels: (a) simulated ground truth, rotation-axis and lateral (view 1) sections; (b) attenuation, convolution and acquisition (+noise) applied to view 1, SNR = 25; (c) input for deconvolution, views 1 and 3 of 7; (d) cross-correlation with ground truth (0.8–1.0) versus iterations (0–100) and computation time for MAPG and optimization II (OSEM) with and without regularization (λ = 0.004); (e) content-based fusion, MAPG, optimization II (−reg) and optimization II (+reg) comparisons in rotation-axis and lateral views; (f) magnified insets.]
Figure 2 | Deconvolution of simulated 3D multiview data. (a) Left, 3D rendering of a computer-generated volume resembling a biological specimen. The red outlines mark the wedge removed from the volume to show the content inside. Right, sections through the generated volume in the lateral direction (as seen by the SPIM camera, top) and along the rotation axis (bottom). (b) Same slices as in a with illumination attenuation applied (left), convolved with a PSF of a SPIM microscope (center) and simulated using a Poisson process (right). The bottom right panel shows the unscaled simulated light-sheet sectioning data along the rotation axis. (c) Slices from views 1 and 3 of the seven views generated from a by applying the processes pictured in b and rescaling to isotropic resolution. These seven volumes are the input to the fusion and deconvolution algorithms quantified in d and visualized in e. (d) Cross-correlation of deconvolved and ground-truth data as a function of the number of iterations for MAPG7 and our algorithm with and without regularization (reg). The inset compares the computation (comp.) time. (Both algorithms were implemented in Java to support partially overlapping data sets; Supplementary Fig. 10.) (e) Slices equivalent to c after content-based fusion14 (first column), MAPG7 deconvolution (second column), our approach without regularization (third column) and with regularization15 (fourth column; Tikhonov15 regularization parameter λ = 0.004). (f) Areas marked by boxes in a,c,e at higher magnification. Note the increased artificial ring patterns in MAPG7.
code and helpful discussions; M. Weber for imaging the Drosophila time series; S. Jaensch for preparing the C. elegans embryo; J.K. Liu (Cornell University) for the LW698 strain; S. Saalfeld for help with 3D rendering; P.J. Keller for supporting F.A. and for the DSLM-SI data set; A. Cardona for access to his computer; and Carl Zeiss Microimaging for providing us with the SPIM prototype. S.P. was supported by MPI-CBG in P.T.’s lab, Howard Hughes Medical Institute (HHMI) in E.M.’s lab and the Human Frontier Science Program (HFSP) Postdoctoral Fellowship LT000783/2012 in R.H.S.’s lab, with additional support from US National Institutes of Health (NIH) grant GM57071. F.A. was supported by HHMI in P.J. Keller’s lab. E.S. and M.S. were supported by MPI-CBG. R.H.S. was supported by NIH grants GM057071, EB013571 and NS083085. E.M. was supported by HHMI and MPI-CBG. P.T. was supported by the European Research Council Community’s Seventh Framework Program (FP7/2007-2013) grant agreement 260746 and the HFSP Young Investigator grant RGY0093/2012. M.S., E.M. and P.T. were additionally supported by the Bundesministerium für Bildung und Forschung grant 031A099.
Author contributions
S.P. and F.A. derived the equations for multiview deconvolution. S.P. implemented the software and performed all analysis, and F.A. implemented the GPU code. E.S. generated and imaged the H2Av-mRFPruby fly line. M.S. prepared, and M.S. and S.P. imaged, the C. elegans L1 sample. S.P. and P.T. conceived the idea and wrote the manuscript. R.H.S. provided support and encouragement; E.M. and P.T. supervised the project.
Competing financial interests
The authors declare no competing financial interests.
Reprints and permissions information is available online at http://www.nature.com/reprints/index.html.
1. Huisken, J., Swoger, J., Del Bene, F., Wittbrodt, J. & Stelzer, E.H.K. Science 305, 1007–1009 (2004).
2. Keller, P.J., Schmidt, A.D., Wittbrodt, J. & Stelzer, E.H.K. Science 322, 1065–1069 (2008).
3. Truong, T.V., Supatto, W., Koos, D.S., Choi, J.M. & Fraser, S.E. Nat. Methods 8, 757–760 (2011).
4. Swoger, J., Verveer, P., Greger, K., Huisken, J. & Stelzer, E.H.K. Opt. Express 15, 8029–8042 (2007).
5. Shepp, L.A. & Vardi, Y. IEEE Trans. Med. Imaging 1, 113–122 (1982).
6. Hudson, H.M. & Larkin, R.S. IEEE Trans. Med. Imaging 13, 601–609 (1994).
7. Verveer, P.J. et al. Nat. Methods 4, 311–313 (2007).
8. Bonettini, S., Zanella, R. & Zanni, L. Inverse Probl. 25, 015002 (2009).
9. Krzic, U. Multiple-View Microscopy with Light-Sheet Based Fluorescent Microscope. PhD thesis, Univ. Heidelberg (2009).
10. Temerinac-Ott, M. et al. IEEE Trans. Image Process. 21, 1863–1873 (2012).
11. Richardson, W.H. J. Opt. Soc. Am. 62, 55–59 (1972).
12. Lucy, L.B. Astron. J. 79, 745–754 (1974).
13. Dempster, A.P., Laird, N.M. & Rubin, D.B. J. R. Stat. Soc. Series B Stat. Methodol. 39, 1–38 (1977).
14. Preibisch, S., Saalfeld, S., Schindelin, J. & Tomancak, P. Nat. Methods 7, 418–419 (2010).
15. Tikhonov, A.N. & Arsenin, V.Y. Solutions of Ill-Posed Problems (Winston, 1977).
16. Long, F., Peng, H., Liu, X., Kim, S. & Myers, E. Nat. Methods 6, 667–672 (2009).
17. Pitrone, P.G. et al. Nat. Methods 10, 598–599 (2013).
18. Keller, P.J. et al. Nat. Methods 7, 637–642 (2010).
19. Schindelin, J. et al. Nat. Methods 9, 676–682 (2012).
20. Pietzsch, T., Preibisch, S., Tomancak, P. & Saalfeld, S. Bioinformatics 28, 3009–3011 (2012).
[Figure 3 panels a–g: content-based versus deconvolved comparisons in xy, xz and yz orientations; a line plot of intensity normalized over line position (0–40 µm); and magnified insets; scale bars 10–100 µm.]
Figure 3 | Application to biological data. (a) Comparison of reconstruction results using content-based fusion14 (top row) and multiview deconvolution (bottom row) on a four-cell–stage C. elegans embryo expressing a PH domain–GFP fusion marking the membranes. Dotted lines mark plots shown in b; white arrowheads mark PSFs of a fluorescent bead before and after deconvolution. (b) Line plot through the volume along the rotation axis (yz, contrast locally normalized). This orientation typically shows the lowest resolution of a fused data set in light-sheet acquisitions, as all input views are oriented axially (Supplementary Fig. ??). SNR is substantially enhanced; arrowheads mark points illustrating increased resolution. (c,d) Cut planes through a blastoderm-stage Drosophila embryo expressing His-YFP in all cells. (e) Magnified view on parts of the Drosophila embryo. The left panel is a view in lateral orientation of one of the input views; the right panel shows a view along the rotation axis characterized by the lowest resolution. (f,g) Comparison of deconvolution and input data of a fixed L1 C. elegans larva expressing LMN-1–GFP (green) and stained with Hoechst (magenta). (f) Single slice through the deconvolved data set; arrowheads mark four locations of transversal cuts shown below. The cuts compare two orthogonal input views (0°, 90°) with the deconvolved data. No input view offers high resolution in this orientation approximately along the rotation axis. (g) The left box in the first row shows a random slice of a view in axial orientation (worst resolution). The second row shows a view in lateral orientation (best resolution). The third row shows the corresponding deconvolved image. The right boxes each show a slice through the nervous system. The alignment of the C. elegans L1 data set was refined using nuclear positions (Online Methods). The C. elegans embryo (a,b) and the Drosophila embryo (d,e) are each one time point of a time series (none of the other time points is used in this paper). The C. elegans L1 larva (f,g) is an individual acquisition of one fixed sample.
Online Methods
Derivations and proof. The efficient Bayesian-based multiview deconvolution is an extension of the classical Richardson-Lucy11,12 deconvolution, which is based on probability theory and Bayes’ theorem. We rederive the single-view Bayesian-based deconvolution and extend it to multiple views (Supplementary Note 1), and prove the convergence of our new derivation to the maximum-likelihood solution (Supplementary Note 2). We show that the Bayesian-based multiview deconvolution can be derived without assuming independence of the input views (Supplementary Note 3) and that the conditional probabilities can subsequently be incorporated into the derivation using ‘virtual’ views (Fig. 1c, Supplementary Figs. 1 and 2 and Supplementary Note 4). Finally, we discuss further optimizations (Supplementary Notes 5 and 6) and perform extensive benchmarks and comparisons (Supplementary Figs. 4–9 and 17 and Supplementary Note 7).
Multiview registration and PSF estimation. Prerequisites for multiview deconvolution of light-sheet microscopy data are precisely aligned multiview data sets and estimates of the point spread functions (PSFs) for all views. We exploit the fact that for the purposes of registration we include subresolution fluorescent beads in the rigid agarose medium in which the specimen is embedded. The beads are initially used for multiview registration of the SPIM data14 and subsequently to extract the PSF for each view for the purposes of multiview deconvolution. For each view, we average the intensities of all beads that were identified as corresponding during registration, yielding a precise measure of the PSF of that view under the specific experimental conditions. This synergy of registration and deconvolution ensures a realistic representation of the PSFs under any imaging condition. Alternatively, simulated PSFs or PSFs measured by other means can be provided as inputs to the deconvolution algorithm.
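The averaging step can be sketched like this (a hypothetical 1D helper of my own, assuming bead positions already come out of the bead-based registration; the real extraction averages 3D patches):

```python
import numpy as np

def estimate_psf(view, bead_positions, radius):
    """Average the intensity patches around corresponding beads to
    obtain an empirical PSF for this view, normalized to unit sum."""
    patches = [view[p - radius : p + radius + 1]
               for p in bead_positions
               if p - radius >= 0 and p + radius + 1 <= view.size]
    psf = np.mean(patches, axis=0)
    return psf / psf.sum()   # normalized so the PSF conserves intensity

# two identical sub-resolution beads imaged with the same blur
view = np.zeros(100)
for p in (30, 70):
    view[p - 1 : p + 2] += np.array([0.25, 0.5, 0.25])
psf = estimate_psf(view, [30, 70], radius=2)
```

Averaging over many beads suppresses the photon noise of any single bead image, which is why the pooled estimate is more precise than a single-bead measurement.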
Multiview deconvolution and other optical sectioning microscopy. In order to better characterize the gain in resolution and contrast of multiview deconvolution, several experiments and comparisons were performed. We compared a SPIM multiview acquisition to a single-view two-photon microscopy acquisition of the same sample (Supplementary Fig. 15). The fixed Drosophila embryo stained with Sytox Green was embedded in agarose and first imaged using a 20×/0.5-NA (numerical aperture) water-dipping objective in the Zeiss SPIM prototype. After acquisition the agarose was cut, and the same sample was imaged using a two-photon microscope and a 20×/0.8-NA air objective. The data sets were aligned using the fluorescent beads visible in both the SPIM and two-photon acquisitions. The SPIM data set was reconstructed using content-based fusion14 and multiview deconvolution and was compared to the two-photon stack as well as the Richardson-Lucy single-view deconvolution11,12 of the two-photon acquisition (Supplementary Fig. 15). Although two-photon microscopy is able to detect more photons in the center of the embryo, the multiview deconvolution shows substantially better resolution and coverage of the sample.
Multiview deconvolution can in principle be applied to any optical sectioning microscope that is capable of sample rotation (Supplementary Fig. 16). We acquired a multiview data set using spinning-disc confocal microscopy and a self-built rotational device14. We compared the quality of one individual input stack with the multiview deconvolution and the RL single-view deconvolution11,12 of this stack. Although one view completely covers the sample, the multiview deconvolution clearly improves the resolution compared to the single-view deconvolution (Supplementary Fig. 16d).
Gain in resolution due to multiview deconvolution. To quantify the gain in resolution, we analyzed images of fluorescent beads embedded in agarose (Supplementary Fig. 11). We extracted all corresponding fluorescent beads from the seven input views, after multiview fusion14 and after multiview deconvolution. Comparing the input views and the multiview fusion, it becomes apparent that the multiview fusion14 reduces resolution in all dimensions except relative to the axial resolution of a single input view. The multiview deconvolution, on the other hand, increases resolution in all dimensions compared to the multiview-fused data, achieving almost isotropic resolution comparable to the lateral resolution of each input stack.
Partially overlapping multiview data sets. In practical multiview deconvolution scenarios, where large samples are acquired, individual views often cover only parts of the sample (Fig. 3c–e and Supplementary Figs. 9 and 12–15). The sequential update strategy (OSEM6) intrinsically supports partially overlapping data sets, as it allows updating only parts of the deconvolved image using subsets of the input data. It is, however, necessary to achieve a balanced update for all pixels of the deconvolved image (Supplementary Fig. 10a–f).
Therefore, a weight image w_v(ξ) is computed for each input view. It consists of a blending function returning 1 in the central parts of a view; close to the boundaries, the weights decrease from 1 to 0 following a cosine function, thus avoiding artifacts at image borders. By default, the sum of all weights for each pixel over all views is normalized, Σ_{v∈V} w_v(ξ) = 1, providing a balanced update of all pixels (Supplementary Fig. 10a,b). For each sequential update contributed by one view v ∈ V, the weight at every pixel location defines the fraction of the Richardson-Lucy11,12 update that is applied to the deconvolved image
\psi^{r+1}(\xi) = \psi^r(\xi) \left( w_v(\xi)\, f_v^{\mathrm{RL}}(\xi) + 1 - w_v(\xi) \right) \quad (3)

where f_v^{RL}(ξ) is the per-view RL correction factor of equation (1).
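A sketch of the weighting in 1D (the ramp width and shapes here are illustrative choices, not the plugin's defaults): the cosine-blended weight image feeds into equation (3), which interpolates between leaving the estimate unchanged (w = 0) and a full RL update (w = 1).

```python
import numpy as np

def blending_weights(n, ramp):
    """Weight image for one view: 1 in the central part, cosine-shaped
    falloff from 1 to 0 over the outermost `ramp` pixels on each side."""
    w = np.ones(n)
    t = 0.5 - 0.5 * np.cos(np.pi * (np.arange(ramp) + 0.5) / ramp)
    w[:ramp] = t
    w[n - ramp:] = t[::-1]
    return w

def weighted_rl_update(psi, phi, psf, w, eps=1e-12):
    """Equation (3): apply only the fraction w of the RL correction."""
    predicted = np.convolve(psi, psf, mode="same")
    ratio = phi / np.maximum(predicted, eps)
    factor = np.convolve(ratio, psf[::-1], mode="same")
    return psi * (w * factor + 1.0 - w)

w = blending_weights(64, ramp=8)
```

With w ≡ 0 the estimate is returned unchanged, and with w ≡ 1 this reduces exactly to one sequential RL update, so border pixels receive a gradually smaller share of each view's correction.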
Normalizing the sum of weights to 1 is, however, equivalent to not using OSEM6 in terms of performance (Supplementary Fig. 10f). To benefit from the OSEM6 speedup, the weights have to sum to values greater than 1. At the same time, the individual weight of each view must be smaller than or equal to 1, as the Bayesian-based iterative deconvolution becomes unstable otherwise. The achievable OSEM6 speedup therefore depends on the coverage of the deconvolved image by the input views (Supplementary Fig. 10b–f). Choosing the sum of weights too high will lead to an uneven deconvolution, i.e., some parts of the sample will be more deconvolved than others (Supplementary Fig. 10b–d). In most cases the minimal number of overlapping views (Supplementary Fig. 10c) will provide a reasonable trade-off
between speedup and uniformity. Some areas close to the boundaries of the output image might still be less deconvolved in case they map to areas in the input views that are subject to the cosine blending function. However, those areas close to the boundaries in the input views typically contain only background.
© 2014 Nature America, Inc. All rights reserved. doi:10.1038/nmeth.2929 Nature Methods
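One plausible way to realize the constraint that per-pixel weights sum to a value greater than 1 while each individual weight stays at most 1 is sketched below. This is our own illustrative scheme, not the plugin's actual code: per-view weights are rescaled toward a target per-pixel sum and clipped at 1.

```python
def scale_weights(weights, target):
    """Rescale per-view weights (one list per view, one entry per pixel)
    so that per pixel they sum toward `target` (the desired number of
    'virtual' overlapping views for the OSEM speedup), while each
    individual weight is clipped at 1 to keep the iteration stable."""
    n_pix = len(weights[0])
    out = [[0.0] * n_pix for _ in weights]
    for i in range(n_pix):
        s = sum(w[i] for w in weights)
        for v, w in enumerate(weights):
            out[v][i] = min(1.0, w[i] * target / s) if s > 0 else 0.0
    return out
```

Note that where a pixel is covered by fewer views than `target`, the clipping caps the achievable sum, which is why choosing the target higher than the minimal number of overlapping views yields an uneven deconvolution.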
In order to facilitate the choice of a reasonable number of overlapping data sets for a given acquisition, the Fiji19 plugin offers the option to output an image containing the number of contributing views at every pixel of the deconvolved image (Supplementary Fig. 10e). This also gives hints on how to adjust the imaging strategy regarding the number of views, the size of the stacks and their overlap. Please note that for smaller or more transparent specimens, data sets are usually completely overlapping (Fig. 3a,b,f,g and Supplementary Fig. 16).
Simulation of SPIM data sets. We simulate a 3D ground-truth data set that resembles a biological object such as an embryo or a spheroid (Fig. 2a). The simulated multiview microscope rotates the sample around the x axis, attenuates the signal, convolves the input, samples at lower axial resolution and creates the final sampled intensities using a Poisson process (Fig. 2b). Finally, the acquired 3D image is rotated back into the orientation of the ground-truth image, which corresponds to the task of multiview registration in real multiview data sets and results in the final input stacks for the multiview deconvolution (Fig. 2c). Computation time is measured until the maximal cross-correlation to the ground truth is achieved. Note that manually stopping the deconvolution at earlier stages can reduce noise in the deconvolved image and optimize computation time.
To simulate the biological object, we use ImgLib2 (ref. 20) to draw a 3D sphere consisting of many small 3D spheres with random locations, sizes and intensities. We simulate at twice the resolution of the final ground-truth image and downsample the result to avoid artificial edges.
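A toy pure-Python version of this procedure may clarify the idea. All parameters here (grid size, sphere counts, radii) are our own arbitrary choices; the actual simulation is implemented in ImgLib2:

```python
import random

def simulate_ground_truth(size=32, n_spheres=40, seed=0):
    """Toy ground-truth object: many small random spheres inside one
    large sphere, rendered at 2x resolution and then 2x2x2-averaged
    down to `size`^3 to avoid hard, artificial edges."""
    rng = random.Random(seed)
    hi = size * 2
    img = [[[0.0] * hi for _ in range(hi)] for _ in range(hi)]
    c, big_r = hi / 2.0, hi / 2.0 - 2
    for _ in range(n_spheres):
        # random center inside the big sphere, random radius and intensity
        while True:
            x, y, z = (rng.uniform(0, hi) for _ in range(3))
            if (x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2 < (big_r - 4) ** 2:
                break
        r = rng.uniform(1.5, 4.0)
        val = rng.uniform(0.5, 1.0)
        for zi in range(max(0, int(z - r)), min(hi, int(z + r) + 1)):
            for yi in range(max(0, int(y - r)), min(hi, int(y + r) + 1)):
                for xi in range(max(0, int(x - r)), min(hi, int(x + r) + 1)):
                    if (xi - x) ** 2 + (yi - y) ** 2 + (zi - z) ** 2 <= r * r:
                        img[zi][yi][xi] += val
    # 2x downsampling by averaging each 2x2x2 block
    return [[[sum(img[2 * z + dz][2 * y + dy][2 * x + dx]
                  for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)) / 8.0
              for x in range(size)] for y in range(size)] for z in range(size)]
```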
An initial rotation around the x axis orients the ground-truth image so that the virtual microscope can perform an acquisition. However, every transformation of an image introduces artifacts owing to interpolation. Although on a real microscope this initial transformation is performed physically and thus does not introduce imaging artifacts, it is required for the simulation. To avoid the situation where artifacts are present only in the simulated views and not in the ground-truth image (Fig. 2a), the ground-truth image is also rotated by 15° around the rotation axis of the simulated multiview microscope, i.e., all simulated input views are rotated by (n + 15)° around the x axis.
The signal degradation along the light sheet is simulated using a simple physical model of light attenuation21. Starting with an initial amount of laser power (or number of photons), the sample absorbs a certain percentage of photons at each spatial location, depending on the absorption rate (δ = 0.01) and the probability density (intensity) of the ground-truth image (Fig. 2b).
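A simplified 1-D sketch of such an attenuation model is shown below. The per-position bookkeeping here is our own assumption for illustration, not the exact implementation of ref. 21:

```python
def attenuate(profile, absorption=0.01):
    """Attenuate a 1-D intensity profile along the illumination axis:
    at each position a fraction of the remaining photons, proportional
    to the absorption rate times the local density, is absorbed, so
    positions deeper in the sample receive less light."""
    remaining = 1.0  # fraction of the initial laser power still available
    out = []
    for density in profile:
        out.append(density * remaining)              # observed signal here
        remaining *= (1.0 - absorption * density)    # photons absorbed here
        remaining = max(remaining, 0.0)
    return out
```

For a homogeneous sample the observed signal decays geometrically with depth, which reproduces the typical brightness falloff along the light-sheet direction.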
To simulate excitation and emission PSFs as well as light-sheet thickness, we measured effective PSFs from fluorescent beads of a real multiview data set taken with the Zeiss SPIM prototype and a 40×/0.8-NA water-dipping objective. The attenuated image is subsequently convolved with a different PSF for each view (Fig. 2b).
To simulate the reduced axial resolution, we sampled every third slice in the axial (z) direction and every pixel in the lateral (xy) direction. This corresponds to the anisotropy of a typical multiview acquisition (Supplementary Table 1). The sampling process for each pixel is an individual Poisson process, with the intensity of the convolved pixel as its mean (Fig. 2b).
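The axial downsampling and per-pixel Poisson draw can be sketched as follows. This is a pure-Python illustration using Knuth's Poisson sampler, not the ImgLib2 implementation:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's algorithm: draw one Poisson-distributed sample with mean lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def acquire(stack, z_step=3, seed=0):
    """Keep every z_step-th slice (anisotropic axial sampling) and draw
    each pixel from a Poisson process whose mean is the convolved,
    attenuated intensity at that pixel."""
    rng = random.Random(seed)
    return [[[poisson(px, rng) for px in row] for row in stack[z]]
            for z in range(0, len(stack), z_step)]
```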
To align all simulated views, we first scaled them to an isotropic volume and then rotated them back into the original orientation of the ground-truth data (Fig. 2c). Linear interpolation was used for all transformations.
Nuclei-based registration of C. elegans. In order to achieve a good deconvolution result, the individual views must be registered with very high precision. To achieve this, we match fluorescent beads that are embedded in the agarose with subpixel accuracy14. However, in C. elegans during larval stages, the cuticle itself acts as a lens, refracting the light sheet, which results in a slight misalignment of the data inside the specimen. We therefore apply a secondary alignment step, which identifies corresponding nuclei between views using redundant geometric local descriptor matching and from these estimates an affine transformation model for each view, correcting for the refraction caused by the cuticle. The algorithm works similarly to the bead-based registration14 and is implemented in Fiji19 as a plugin called "descriptor-based series registration" (S.P., unpublished software).
Implementation details. The simulation of multiview data (Fig. 2) and the 3D rendering (Fig. 2a) are implemented in ImgLib2 (ref. 20). The source code for the simulation is available as Supplementary Software 1; links to the current source code hosted on GitHub are available in the "readme" file and in Supplementary Note 8.
The multiview deconvolution is implemented in Fiji19 using ImgLib2 (ref. 20). The performance-critical tasks are the convolutions with the PSFs or the compound kernels. They are implemented using Fourier convolutions, and an alternative implementation of the Fourier convolution is provided for the GPU. Note that it is currently not possible to implement the entire pipeline on the GPU owing to the limited size of graphics card memory. All significant parts of the implementation, including per-pixel operations, copying and pasting of blocks, and the fast Fourier transform, are completely multithreaded to allow maximal execution performance on the CPU and GPU. The source code is available as Supplementary Software 2; links to the GitHub repository containing the current source code versions are listed in the "readme" file and in Supplementary Note 8. Please note that an updated version of the multiview deconvolution is already shipped with Fiji. To simply use the deconvolution, building the source code is not required; an updated Fiji19 is sufficient.
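The principle behind the Fourier convolution is the convolution theorem: a convolution in image space becomes a pointwise multiplication in frequency space. The following self-contained Python sketch (a radix-2 FFT for power-of-two lengths; not the ImgLib2 or CUDA code) illustrates a circular convolution computed this way:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    With invert=True, computes the unnormalized inverse transform."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def fourier_convolve(signal, kernel):
    """Circular convolution via the convolution theorem:
    conv = IFFT(FFT(signal) * FFT(kernel)) / n.
    Assumes len(signal) == len(kernel), a power of two."""
    n = len(signal)
    prod = [x * y for x, y in zip(fft(signal), fft(kernel))]
    return [v.real / n for v in fft(prod, invert=True)]
```

Convolving with a unit impulse returns the signal unchanged, and with a shifted impulse it returns the circularly shifted signal, which is a quick sanity check of the transform pair.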
The GPU implementation, based on CUDA, alternatively executes the Fourier convolution on Nvidia hardware. The native code is called via Java Native Access. The source code and precompiled libraries for CUDA 5.5 for Windows 64-bit and CUDA 5.0 for Linux 64-bit are available as Supplementary Software 3. Note that for Windows the DLL has to be placed in the Fiji directory and for Linux in a subdirectory called lib/linux64, and that a current version of the Nvidia CUDA driver needs to be installed on the system.
The native CUDA code is platform dependent. If the provided precompiled libraries do not work, make sure you have the current Nvidia CUDA driver (https://developer.nvidia.com/cuda-downloads) installed and that the Nvidia samples are working. If Fiji19 still does not recognize the Nvidia CUDA-capable devices,
compile the CUDA code from source. You can use CMake, which is set up to compile the code platform-independently. Alternatively, it can be compiled using the following command under Linux:

nvcc convolution3Dfft.cu --compiler-options '-fPIC' --shared -lcudart -lcufft -I/opt/cuda5/include/ -L/opt/cuda5/lib64 -lcuda -o libConvolution3DfftCUDAlib.so
Fiji plugins. The multiview deconvolution is integrated into Fiji19 (http://fiji.sc/). Please make sure to update Fiji19 before running the multiview deconvolution. The typical workflow consists of three steps.
1. Run the bead-based registration on the data (http://fiji.sc/SPIM_Bead_Registration).
2. Perform a simple average multiview fusion in order to define the correct bounding box on which the deconvolution should be performed (http://fiji.sc/MultiView_Fusion).
3. Run the multiview deconvolution using either the GPU or the CPU implementation (http://fiji.sc/MultiView_Deconvolution).
Detailed instructions for the individual plugins can be found on their respective Fiji wiki pages, summarized at http://fiji.sc/SPIM_Registration. Note that owing to the scripting capabilities of Fiji, the workflow can be automated and executed on a cluster (http://fiji.sc/SPIM_Registration_on_cluster). An example data set is available for download at http://fiji.sc/SPIM_Registration#Downloading_example_dataset.
21. Uddin, M.S., Lee, H.K., Preibisch, S. & Tomancak, P. Microsc. Microanal. 17, 607–613 (2011).