IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 2, NO. 5, OCTOBER 2008 767
Advanced Imaging Methods for Long-Baseline
Optical Interferometry
Guy Le Besnerais, Sylvestre Lacour, Laurent M. Mugnier, Eric Thiébaut, Guy Perrin, and Serge Meimon
Abstract—We address the data processing methods needed for imaging with a long-baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, MIRA and WISARD, representative of the two generic approaches for dealing with the missing phase information. MIRA is based on an implicit approach and a direct optimization of a Bayesian criterion, while WISARD adopts a self-calibration approach and an alternate minimization scheme inspired by radio-astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors the object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.

Index Terms—Fourier synthesis, image reconstruction, optical interferometry, phase closure.
I. INTRODUCTION

THE ultimate resolution of an individual telescope is limited by its diameter. Because of size and mass constraints, today's technology limits diameters to 10 m or so for ground-based telescopes and to a few meters for space telescopes. Optical interferometry (OI) allows one to surpass the resulting resolution limitation, currently by a factor of a few tens, and in the next decade by a factor of 100.

Interferometers have allowed breakthroughs in stellar physics with the first measurements of diameters and, more generally, of fundamental stellar parameters; see the recent reviews [1], [2]. Star pulsations have been detected, allowing us to understand both the physics of stars and the way they release matter into the interstellar medium. Also, the measurement of the pulsation of Cepheid stars has allowed astronomers to establish an accurate distance scale in the universe. Many subjects have been addressed, a spectacular one being the shape of fast rotating stars, which are now clearly known to be oblate, sometimes by a huge amount [3], [4]. With the advent of large telescopes and adaptive optics, more distant sources, beyond our own galaxy, are now accessible. An important result has been the direct study of the dusty torus around super-massive black holes in the center of these galaxies, which is the cornerstone of the unified theory explaining the active galactic nuclei phenomenon [5]–[7]. With the success of large interferometers, and especially of the European VLTI, interferometry is now used as a regular astrophysical tool by nonexpert astronomers, and many more results are to be expected with the steadily increasing amount of published material.

Manuscript received February 01, 2008; revised August 06, 2008. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Julian Christou.
G. Le Besnerais, L. M. Mugnier, and S. Meimon are with ONERA, 29, 92122 Châtillon Cedex, France (e-mail: lebesner@onera.fr; mugnier@onera.fr; meimon@onera.fr).
S. Lacour and G. Perrin are with LESIA, Observatoire de Paris, 92190 Meudon, France (e-mail: sylvestre.lacour@obspm.fr; guy.perrin@obspm.fr).
E. Thiébaut is with CRAL, École Normale Supérieure de Lyon, 46, allée d'Italie, 69364 Lyon cedex 07, France (e-mail: thiebaut@obs.univ-lyon1.fr).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/JSTSP.2008.2005353
OI consists in taking the electromagnetic fields received at
each of the apertures of an array (elementary telescopes or
mirror segments) and making them interfere. For each pair of
apertures, the data contain high-resolution information at an
angular spatial frequency proportional to the vector separating
the apertures projected onto the plane of the sky, or baseline.
With baselines of several hundred meters, this spatial frequency
can be much larger than the cut-off frequency of the individual
apertures.
Long-baseline interferometers, for which the baseline-to-aperture ratio is quite large, usually provide a discrete set of spatial frequencies of the object brightness distribution, from which an image can be reconstructed by means of Fourier synthesis techniques. For the time being, interferometers able to provide direct images are not common: the Large Binocular Telescope (LBT), cf. lbto.org/, will be the first of this kind, with a baseline of the same order as the diameter of the two individual apertures. Recent, comprehensive reviews of OI and its history can be found, for instance, in [2], [8].

This paper addresses optical interferometry imaging (OII), i.e., the data processing methods needed for imaging sources with today's long-baseline optical interferometers. Many reconstruction methods for OII are inspired by techniques developed for radio interferometry, as can be seen in the methods which were compared in the recent Interferometry Imaging Beauty Contests: IBC'04 [9] and IBC'06 [10]. Another body of work is the set of parametric reconstruction (a.k.a. model-fitting) methods. This latter class of methods is bound to remain a reference, partly because in interferometry, optical data will long remain much more sparse than radio data. In some instances, e.g., with the very extended object of IBC'06 [10], OII is very difficult even with a relatively large data set, and thus often relies on the information provided by a parametric
1932-4553/$25.00 © 2008 IEEE
reconstruction. The latter is used at least as a guidance for
judging the (nonparametric) image reconstruction, and often
as a constraint for the support of the observed object, although
this process is not always explicit.
We adopt a general formulation of nonparametric image
reconstruction as the solution of a constrained optimization
problem. Within this framework, methods may differ by many
aspects, notably: the approximation of the data statistics, the
type of regularization, the optimization strategy and the explicit
or implicit accounting of missing phase information.
We present two recent nonparametric reconstruction methods, representative of the two generic approaches for dealing with the missing phase information. These nonparametric reconstruction methods are evaluated on synthetic and on astronomical data. The synthetic data allow us to study the influence of several types of prior knowledge. In particular, we show that, contrarily to what is generally believed, appropriate quadratic regularizations are able to perform frequency interpolation and are suitable for the problem at hand if the object is compact: we propose a separable quadratic regularization which favors the object compactness and yields images of quality comparable to nonquadratic regularizations.

On the astronomical data we demonstrate the operational imaging capabilities of these methods; for these data, which may be considered representative of today's optical long-baseline interferometers, we show that the parametric approach remains a choice of reference for OII. Finally, we discuss the possible associations of both kinds of reconstruction methods.

The paper is organized as follows: Section II presents the instrumental process so as to define the observation model. Section III addresses the two main categories of prior information used for the reconstruction of the observed astronomical object: parametric models on the one hand, and regularization terms for nonparametric reconstruction methods on the other hand. Section IV presents the two reconstruction methods, MIRA and WISARD. Section V presents results on real data. Discussion and concluding remarks are gathered in Section VI.
II. OBSERVATION MODEL OF LONG-BASELINE
OPTICAL INTERFEROMETRY
Let us consider a monochromatic source of wavelength λ with a limited angular extension. Its brightness distribution can then be represented by x(ξ), where ξ spans a small portion S of the plane of the sky around the mean direction of observation.

An intuitive way of representing data formation in a long-baseline interferometer is Young's double-hole experiment, in which the aperture of each telescope is modeled by a (small) hole letting through the light coming from an object located at a great distance [11], [12]. At each observation time t, each pair of telescopes (a, b) yields a fringe pattern with a spatial frequency ν_ab(t) = B_ab(t)/λ, where the baseline B_ab(t) is the vector linking telescopes a and b, projected onto the plane normal to the mean direction of observation. The coherence of the electromagnetic fields at the two apertures is measured by the visibility, or contrast, of the fringes and by their position, which are often grouped together in a complex visibility. In an ideal experiment, the Van Cittert–Zernike theorem [11], [13] states that the coherence function (hence the complex visibility) is the Fourier transform (FT) of the flux-normalized object at spatial frequency ν_ab(t). Let us introduce notations for the Fourier quantities:

x̃(ν) ≜ ∫ x(ξ) e^{−2iπ ν·ξ} dξ    (1)
a_ab(t) ≜ |x̃(ν_ab(t))|    (2)
φ_ab(t) ≜ arg x̃(ν_ab(t))    (3)
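To make (1)–(3) concrete, the sketch below (plain Python; the helper name `complex_visibility` is ours, not the paper's) evaluates the FT of a pixelized, flux-normalized object at one arbitrary spatial frequency by direct summation, then splits it into modulus (2) and phase (3):

```python
import cmath
import math

def complex_visibility(image, pix, freq):
    """Direct evaluation of the Fourier transform (1) of a pixelized object
    at an arbitrary spatial frequency `freq` (cycles per unit angle).
    image[k][l] is the flux in pixel (k, l); pix is the angular pixel size."""
    total = sum(sum(row) for row in image)          # flux normalization
    acc = 0j
    for k, row in enumerate(image):
        for l, flux in enumerate(row):
            xi = (k * pix, l * pix)                 # angular position of the pixel
            phase = -2.0 * math.pi * (freq[0] * xi[0] + freq[1] * xi[1])
            acc += flux * cmath.exp(1j * phase)
    return acc / total

# Tiny example: a single off-center point source.
img = [[0.0, 0.0], [0.0, 1.0]]
v = complex_visibility(img, pix=1.0, freq=(0.25, 0.0))
modulus, phase = abs(v), cmath.phase(v)             # quantities (2) and (3)
```

For a point source the modulus (2) is 1 at every frequency and the phase (3) is linear in the frequency, which is the expected behavior of a pure Fourier shift.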
In ground-based interferometry, interferometric data are corrupted by the atmospheric turbulence. Inhomogeneities in the air temperature and humidity of the atmosphere generate inhomogeneities in the refractive index of the air, which perturb the propagation of light waves through the atmosphere. These perturbations lead to space and time variations of the input pupil phase, which can be modeled by a Gaussian spatio-temporal random process [14]–[16]. The spatial behavior of this process is generally described by Fried's diameter r₀: the smaller r₀, the stronger the turbulence. Typically, its value is about 15–20 cm at good sites. The typical evolution time τ₀ of the turbulent phase is given by the ratio of r₀ to the velocity dispersion of the turbulence layers in the atmosphere [14]: a typical value is a few milliseconds. In the sequel, short exposure (respectively, long exposure) refers to data acquired with an integration time shorter (respectively, markedly longer) than τ₀ [17].
A. Short-Exposure System Response

For apertures of diameter notably larger than r₀, the loss of coherence due to the turbulence perturbations reduces the visibility of the fringes. This can be counterbalanced if the wavefronts are corrected by adaptive optics (AO) [18], at a rate faster than τ₀, before the beams are made to interfere. In the sequel, we assume that each aperture is indeed either small enough or corrected by AO. Note, however, that it is possible to operate in the multi-speckle mode [19].

In the Young's holes analogy mentioned above, the remaining effect of turbulence on interferometric measurements is to add a phase shift (or piston) α_a(t) at each aperture a to the wave going through it. The interferences between two apertures a and b are thus out of phase by a random "differential piston" α_b(t) − α_a(t), whose typical evolution time is of the order of τ₀ and depends on the baseline [20]. A short-exposure observation finally writes

a_ab^mes(t) = a_ab(t)    (4)
φ_ab^mes(t) = φ_ab(t) + α_b(t) − α_a(t)    (5)

When a complete interferometer array of N telescopes is used, i.e., one in which all the possible two-telescope baselines can be formed simultaneously, there are N(N − 1)/2 visibility phase measurements (5) for each instant t. These equations can be put in matrix form

φ^mes(t) = φ(t) + B α(t)    (6)

where the baseline operator B, of dimensions N(N − 1)/2 × N, is formally defined in Appendix A.
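The matrix form (6) is easy to reproduce numerically. The sketch below builds a baseline operator for a complete array, one row per pair (a, b) with −1 and +1 in columns a and b, and applies it to arbitrary pistons; the sign convention is our assumption, chosen to match (5), since Appendix A is not reproduced here:

```python
from itertools import combinations

def baseline_operator(n_tel):
    """Sketch of the baseline operator B of (6): it maps the n_tel aperture
    pistons alpha to the n_tel*(n_tel-1)/2 differential pistons
    alpha_b - alpha_a, one row per baseline (a, b) with a < b."""
    pairs = list(combinations(range(n_tel), 2))
    B = [[0] * n_tel for _ in pairs]
    for row, (a, b) in enumerate(pairs):
        B[row][a] = -1
        B[row][b] = +1
    return B, pairs

B, pairs = baseline_operator(4)        # complete 4-telescope array: 6 baselines
alpha = [0.3, -1.2, 0.7, 2.0]          # arbitrary turbulent pistons
# Differential pistons B*alpha added to the object phases in (6).
diff = [sum(B[r][t] * alpha[t] for t in range(4)) for r in range(len(pairs))]
```

Each row of B sums to zero, which reflects the fact that a global piston common to all apertures is unobservable.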
Fig. 1. Frequency coverages obtained with the IOTA interferometer on χ Cygni (observing run of May 2006). For a given baseline B_ab(t) and a given spectral channel of mean wavelength λ, the measured frequency is ν_ab(t) = B_ab(t)/λ.
Note that the baseline B_ab(t) between apertures a and b depends on time. Indeed, the aperture configuration as seen from the object changes as the Earth rotates. It is thus possible to use "Earth rotation synthesis", a technique that consists, when the source emission does not vary in time, in repeating the measurements in the course of a night of observation to increase the frequency coverage of the interferometer. A typical frequency coverage obtained with the IOTA interferometer (see Section V-A) is presented in Fig. 1. This Fourier coverage can be formally represented by a short-exposure transfer function

H(ν) = Σ_t Σ_{(a,b)} g_ab(t) δ(ν − ν_ab(t))    (7)

where δ denotes the Dirac function and the summations extend over all observation instants and used pairs of telescopes. The complex gains g_ab(t) account for visibility losses that originate from various causes. Some of them can be estimated using observations of a calibrator (i.e., a star unresolved by the interferometer, or whose diameter is precisely known, located near the object of interest and with a similar spectral type) and compensated for. In the following, we consider that the gains are pre-calibrated, i.e., g_ab(t) = 1.

Equation (7) and Fig. 1 provide a first insight into the data processing problem at hand. It is a Fourier synthesis problem, i.e., it consists in reconstructing an object from a sparse subset of its Fourier coefficients. As shown by Fig. 1, interferometry gives access to very high frequency coefficients, but the number of data is very limited (a few hundreds). Measuring these data with a sufficient signal-to-noise ratio (SNR) is quite delicate. Indeed, in a short exposure, the differential pistons are expressed by random displacements of the fringes without attenuation of the contrast. But in long-exposure measurements, averaging these displacements leads to a dramatic visibility loss: a specific averaging process must be used, as described in the next section.
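Earth-rotation synthesis can be illustrated with the standard relations giving the projected baseline (u, v) as a function of hour angle and declination (textbook formulas from radio interferometry; the function name and the IOTA-like numbers below are illustrative, not taken from the paper):

```python
import math

def uv_track(L_eq, dec, hour_angles, wavelength):
    """Projected baseline (u, v), in cycles per radian, for a baseline given
    in local equatorial coordinates L_eq = (Lx, Ly, Lz) in metres, a source
    at declination `dec`, sampled at the given hour angles (standard
    Earth-rotation synthesis relations)."""
    Lx, Ly, Lz = L_eq
    track = []
    for H in hour_angles:
        u = (Lx * math.sin(H) + Ly * math.cos(H)) / wavelength
        v = (-Lx * math.cos(H) * math.sin(dec)
             + Ly * math.sin(H) * math.sin(dec)
             + Lz * math.cos(dec)) / wavelength
        track.append((u, v))
    return track

# A 30 m baseline observed in the H band over +/-3 h around transit:
H_list = [math.radians(15 * h) for h in (-3, 0, 3)]   # 15 deg of rotation per hour
track = uv_track((30.0, 0.0, 0.0), math.radians(30), H_list, 1.65e-6)
```

Repeating the measurement over a night sweeps each baseline along an elliptical arc in the (u, v) plane, which is exactly what fills the coverage of Fig. 1.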
B. Long-Exposure Data

1) Principle: As mentioned above, the main obstacle to long-exposure data measurement is the differential pistons, which affect the phase of the visibility. On the one hand, averaging the modulus of the visibility is possible; on the other hand, some phase information can be obtained by carrying out phase closure [21] before the averaging. The principle is to sum the short-exposure visibility phase data measured on a triangle of telescopes (a, b, c). From (5), one can check that the turbulent pistons are canceled out in the closure phase defined by

β_abc(t) ≜ φ_ab^mes(t) + φ_bc^mes(t) − φ_ac^mes(t)    (8)

To form this type of expression it is necessary to measure three visibility phases simultaneously, and thus to use an array of three telescopes or more.

In the case of a complete interferometer array of N telescopes, the set of closure phases that can be formed is generated by, for instance, the (N − 1)(N − 2)/2 closure phases measured on the triangles of telescopes including telescope 1. There are (N − 1)(N − 2)/2 of these independent closure phases. In what follows, the vector grouping together these independent closure phases will be noted β(t), and a closure operator C is defined such that β(t) = C φ^mes(t). The second equation is a matrix version of (8): the closure operator cancels the differential pistons, a property that can be written C B = 0, with the baseline operator B introduced in (6) and Appendix A. It can be shown [22] that this equation implies that the closure operator has a kernel of dimension N − 1, given by

Ker C = Im B̄    (9)

where B̄ is obtained by removing the first column from B. The closure phase measurement thus does not contain all the phase information. This classical result can also be obtained by counting up the phase unknowns for each instant of measure t: there are N(N − 1)/2 unknown object visibility phases and (N − 1)(N − 2)/2 independent measured phase closures, which gives N − 1 missing phase data. In other words, optical interferometry through turbulence is a Fourier synthesis problem with partial phase information. As is well known, the more apertures in the array, the smaller the proportion of missing phase information.
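The piston-cancellation property C B = 0 and the counting argument can be checked numerically. The following sketch builds B and a closure operator from the triangles including telescope 0 and verifies that the product vanishes; the sign and ordering conventions are our assumptions, since Appendix A is not reproduced here:

```python
from itertools import combinations

def operators(n_tel):
    """Baseline operator B and closure operator C (sketch).  C is built from
    the (n_tel-1)(n_tel-2)/2 triangles that include telescope 0, with the
    closure on (0, b, c) taken as phi_0b + phi_bc - phi_0c, as in (8)."""
    pairs = list(combinations(range(n_tel), 2))
    index = {p: i for i, p in enumerate(pairs)}
    B = [[0] * n_tel for _ in pairs]
    for r, (a, b) in enumerate(pairs):
        B[r][a], B[r][b] = -1, +1
    C = []
    for b, c in combinations(range(1, n_tel), 2):
        row = [0] * len(pairs)
        row[index[(0, b)]] += 1
        row[index[(b, c)]] += 1
        row[index[(0, c)]] -= 1
        C.append(row)
    return B, C

def matmul(C, B):
    return [[sum(C[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(C))]

B, C = operators(5)   # 5 telescopes: 10 baselines, 6 independent closures
CB = matmul(C, B)     # the piston-cancellation property: C B = 0
```

For N = 5 the counting argument gives 10 unknown visibility phases, 6 closures, hence N − 1 = 4 missing phase data, consistent with the dimensions above.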
2) Data Reduction and Averaging: In practice, the basic observables of optical interferometry are then sets of three simultaneous fringe patterns obtained on a triangle of telescopes. The outputs of the pre-processing stage (see for instance [23] for a description of the pre-processing with IOTA data) are as follows.
• Power spectra s_ab(t):

s_ab(t) ≜ ⟨ a_ab^mes(t′)² ⟩_t    (10)
• Bispectra b_abc(t), defined by

b_abc(t) ≜ ⟨ y_ab^mes(t′) y_bc^mes(t′) y_ac^mes(t′)* ⟩_t    (11)

where y_ab^mes(t) ≜ a_ab^mes(t) e^{i φ_ab^mes(t)} is the measured complex visibility.
Notation ⟨·⟩_t expresses the averaging in a time interval around instant t. The integration time must be short enough for the spatial frequency to be considered constant during the integration despite the rotation of the Earth. It also impacts the standard deviation of the residual noises on the measurement. Equations (10) and (11) are biased estimates of the object spectrum and bispectrum; see [24], [25] for the expressions of these biases. The phases of the bispectra constitute unbiased long-exposure closure phase estimators.
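The contrast between naive long-exposure averaging and bispectrum averaging can be illustrated with a toy simulation: uniform random pistons destroy the averaged complex visibility, while the averaged bispectrum retains exactly the closure phase of the object. The numbers are arbitrary; this is a sketch of the principle, not of the IOTA pre-processing:

```python
import cmath
import math
import random

random.seed(0)
# True object visibility phases on a telescope triangle.
phi12, phi23, phi13 = 0.4, -1.1, 0.9
v12, v23, v13 = (cmath.exp(1j * p) for p in (phi12, phi23, phi13))

long_exp = 0j        # naive long-exposure average of one complex visibility
bispec = 0j          # averaged bispectrum, as in (11)
n = 20000
for _ in range(n):
    a1, a2, a3 = (random.uniform(-math.pi, math.pi) for _ in range(3))
    y12 = v12 * cmath.exp(1j * (a2 - a1))     # short-exposure visibilities (5)
    y23 = v23 * cmath.exp(1j * (a3 - a2))
    y13 = v13 * cmath.exp(1j * (a3 - a1))
    long_exp += y12 / n
    bispec += y12 * y23 * y13.conjugate() / n

closure_true = phi12 + phi23 - phi13
```

The averaged visibility collapses toward zero (dramatic visibility loss), whereas the pistons cancel term by term in the triple product, so the bispectrum phase equals the true closure phase in every sample.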
3) Observation Model and Data Likelihood: Using the above notations and concatenating all instantaneous measurements in vectors denoted by bold letters (s for the power spectra and β for the closure phases), the long-exposure observation model writes

s = |x̃|² + n^s,    β = C arg x̃ + n^β    (12)

Noise terms are usually only characterized by estimated second-order statistics, hence they are modeled as Gaussian processes: n^s ∼ N(0, R^s) and n^β ∼ N(0, R^β). Covariance matrices R^s and R^β are generally assumed to be diagonal (as, for instance, in the OIFITS data exchange format [26]), although correlations are for instance produced by the use of the same reference stars in the calibration process [27].

Note that the observation model (12) corresponds to a minimal dataset for a complete N-telescope interferometer. In practice, the data may contain closures without the corresponding power spectra, or bispectrum amplitudes. These supplementary data are not processed the same way by all the data reconstruction methods, in particular by the MIRA and WISARD algorithms described in Section IV.

The neg-log-likelihood derived from this observation model writes

J^data(x) = J^s(x) + J^β(x)    (13)
where J^s(x) denotes the weighted quadratic statistics of the squared-visibility residuals at each time t. In the sequel, we shall use the term "likelihood" to denote the various goodness-of-fit terms such as (13) derived from the distribution of the data.

The closure term J^β(x) is usually also a weighted quadratic distance over the phase closure residuals, but in order to account for phase wrapping and to avoid excessive nonlinearity, the term related to the measured phase closures can also be chosen as a weighted quadratic distance between the complex phasors:

J^β(x) = Σ_t ‖ e^{i β(t)} − e^{i C φ_x(t)} ‖²    (14)

where C φ_x(t) is the model of the measured phase closures β(t), φ_x(t) denoting the phases of x̃ at the frequencies measured at time t.
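The advantage of the phasor distance (14) over a naive quadratic distance on wrapped phases can be seen on a one-closure example (a sketch; the scalar case, without the weighting of (14)):

```python
import cmath
import math

def quad_phase_residual(beta_meas, beta_model):
    """Naive quadratic distance on phases: blows up when the residual
    crosses the +/- pi wrap, even if the true angular error is tiny."""
    return (beta_meas - beta_model) ** 2

def phasor_residual(beta_meas, beta_model):
    """Quadratic distance between complex phasors, as in (14): equals
    4*sin^2(delta/2), so it is 2*pi-periodic and ~delta^2 for small
    residuals delta."""
    return abs(cmath.exp(1j * beta_meas) - cmath.exp(1j * beta_model)) ** 2

# Two closures that are truly 0.1 rad apart, but across the wrap:
m, x = math.pi - 0.05, -math.pi + 0.05
naive = quad_phase_residual(m, x)     # sees (2*pi - 0.1)^2, a huge penalty
robust = phasor_residual(m, x)        # sees ~0.01, the true small residual
```

This periodicity is exactly what makes (14) preferable when measured closures are only known modulo 2π.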
III. OBJECT MODELS

Imaging amounts to finding a flux-normalized positive function x(ξ), defined over the support S, which fits the data (12). One way is to minimize the likelihood (13). Three problems are then encountered.
1) Under-determination: because of the noise, the object which minimizes the likelihood is not necessarily the right solution: actually, several objects are compatible with the data. This is a usual situation in statistical estimation, which is here emphasized by the small number of measured Fourier coefficients, the noise level, and the missing phase information.
2) Nonconvexity: the phase indetermination leads to a nonconvex¹ and often multi-modal data likelihood.
3) Non-Gaussian likelihood: phase and modulus measurements with Gaussian noise lead to a non-Gaussian likelihood in x̃. In other words, even if all the visibility phases were measured instead of just the closure phases, the data likelihood would still be nonconvex. We shall come back to this point in Section IV-B.

To deal with under-determination, one is led to assume some further prior knowledge on the object. In this section we review two approaches: parametric modeling and regularized reconstruction.
A. Parametric Models

1) Introduction: The object is sought by minimization of (13) using a parametric form. The resulting criterion often exhibits further nonlinearities, but as the number of parameters is very limited (typically a handful), global minimization is achievable. The minimal value of the criterion gives information on whether the chosen model is appropriate to describe the brightness distribution of the object. Additionally, the second derivative of the criterion around its minimum allows the estimation of error bars.

For years, interferometric data were very sparse, essentially because the number of telescopes in interferometers was quite small. Most interferometers were two-telescope arrays and in a few cases three telescopes were available. The only way to interpret the data was then to use parametric models with a very small number of parameters, typically two or three. Among the most used models, let us mention the uniform disk to measure stellar diameters, and binary system models.

When objects are as simple as individual or binary regular stars, such simple models can be used beforehand to prepare the observations and anticipate likely visibility values. This is very useful to conduct "baseline bootstrapping", a process which consists in observing a visibility of very low SNR using a triangle of telescopes with the two other baselines having a higher SNR. Simple parametric models are also used to compute the expected visibility of reference stars in order to calibrate the
1Convexity is a desirable property of a criterion when a minimization process
is conducted, which can furnish sufficient conditions for convergence of iter-
ative local optimization techniques toward a global minimum. A well-known
reference is the book by R.T. Rockafellar [28].
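The whole parametric pipeline sketched above, global minimization over very few parameters followed by error bars from the curvature of the criterion around its minimum, fits in a few lines. The sketch below uses a hypothetical one-parameter Gaussian-source visibility model for illustration, not one of the paper's models:

```python
import math

def model_vis(theta, freqs):
    """Hypothetical one-parameter model: visibility of a circular Gaussian
    source of FWHM `theta` (mas) at radial frequencies `freqs` (cycles/mas).
    Any smooth parametric form would do for this sketch."""
    c = math.pi ** 2 / (4 * math.log(2))
    return [math.exp(-c * (theta * f) ** 2) for f in freqs]

def criterion(theta, freqs, data, sigma):
    """Neg-log-likelihood, as in (13), for independent Gaussian errors."""
    return 0.5 * sum(((d - m) / sigma) ** 2
                     for d, m in zip(data, model_vis(theta, freqs)))

freqs = [0.05, 0.1, 0.15, 0.2]
sigma = 0.02
data = model_vis(2.0, freqs)            # noiseless data from theta = 2 mas

# Global minimization by a coarse grid is feasible with a single parameter.
grid = [t / 100 for t in range(1, 500)]
theta_hat = min(grid, key=lambda t: criterion(t, freqs, data, sigma))

# Error bar from the second derivative of the criterion at its minimum.
h = 1e-3
d2 = (criterion(theta_hat + h, freqs, data, sigma)
      - 2 * criterion(theta_hat, freqs, data, sigma)
      + criterion(theta_hat - h, freqs, data, sigma)) / h ** 2
sigma_theta = 1 / math.sqrt(d2)         # Gaussian approximation of the error
```

A large minimal value of the criterion would signal that the chosen model is inadequate for the observed brightness distribution, as noted in the text.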
response of the interferometer and overcome residual visibility losses, such as those due to polarization effects.

Now that current interferometers yield richer data, more sophisticated models can be used. It is outside the scope of this paper to describe the large number of parametric models which are used nowadays; however, we present in the following subsections two trends in parametric modeling.
2) Fitting of a Geometrical Model: Parametric inversion can be used to derive the geometrical structure of the brightness distribution of the object. An example among others is the derivation of the brightness profile I(μ) of a limb-darkened disk. Limb darkening is an optical depth effect, which results in a drop of the effective temperature (and hence intensity) towards the edge of the stellar disk. Numerous types of limb-darkening models exist in the literature. To cite only two, one can use a power law [29] as follows:

I(μ)/I(1) = μ^α    (15)

or a quadratic law [30]

I(μ)/I(1) = 1 − u_a (1 − μ) − u_b (1 − μ)²    (16)

where μ, the cosine of the azimuth of a surface element of the star, is equal to √(1 − (2ρ/θ)²), ρ being the angular distance from the star center, and θ is the angular diameter of the photosphere. The parametric fit is actually done on complex visibilities. In the Fourier domain, the power-law limb-darkening model yields [31]

V(x) = Γ(ν + 1) (2/x)^ν J_ν(x),   ν ≜ α/2 + 1    (17)

where the parameters are α and θ, f is the radial spatial frequency, and Γ is the Euler gamma function. The quadratic-law model yields

V(x) = [ (1 − u_a − u_b) J₁(x)/x + (u_a + 2u_b) √(π/2) J_{3/2}(x)/x^{3/2} − 2u_b J₂(x)/x² ] / ( 1/2 − u_a/6 − u_b/12 )    (18)

where the parameters are u_a, u_b and θ; J₁ and J₂ are the first and second-order Bessel functions, respectively, and

x ≜ π θ f    (19)
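A sketch of the power-law model (17), with the Bessel function evaluated by its power series so the example stays self-contained. Setting α = 0 (no darkening) must recover the uniform-disk visibility 2J₁(x)/x, which gives a useful consistency check:

```python
import math

def bessel_j(nu, x, terms=40):
    """Bessel function of the first kind, by its power series (adequate for
    the moderate arguments used in disk-visibility fits)."""
    return sum((-1) ** k / (math.factorial(k) * math.gamma(k + nu + 1))
               * (x / 2) ** (2 * k + nu) for k in range(terms))

def visibility_powerlaw(alpha, x):
    """Visibility of a power-law limb-darkened disk, as in (17):
    V = Gamma(nu+1) * (2/x)^nu * J_nu(x), with nu = alpha/2 + 1 and
    x = pi * theta * f as in (19)."""
    nu = alpha / 2 + 1
    return math.gamma(nu + 1) * (2 / x) ** nu * bessel_j(nu, x)

x = 3.0
v_ld = visibility_powerlaw(0.0, x)       # alpha = 0: no limb darkening
v_ud = 2 * bessel_j(1, x) / x            # uniform-disk visibility
```

In a fit, α and θ would be adjusted so that V(πθf) matches the calibrated visibility moduli at the measured radial frequencies f.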
3) Physical Parameter Determination: An interesting possibility offered by parametric inversion is to directly adjust physical parameters of the objects. An example can be found in a study of the star μ Cep from FLUOR interferometric observations [32]. The data were fitted with an analytical expression of the brightness distribution that includes a temperature for the photosphere and a radiative transfer model of the molecular layer. The model used is radial, and writes

I(λ, ρ) = B_λ(T⋆) e^{−τ(λ)/μ_l(ρ)} + B_λ(T_l) [1 − e^{−τ(λ)/μ_l(ρ)}]    (20)

for ρ ≤ θ⋆/2, and

I(λ, ρ) = B_λ(T_l) [1 − e^{−2τ(λ)/μ_l(ρ)}]    (21)

for θ⋆/2 < ρ ≤ θ_l/2 (zero otherwise), where the parameters are T⋆ and T_l, the temperatures of the star and of the molecular layer, with θ⋆ and θ_l the diameters of the star and the molecular layer, respectively, and τ(λ) the opacity of the molecular layer as a function of the wavelength; μ_l(ρ) ≜ √(1 − (2ρ/θ_l)²) is the cosine of the angle at which the line of sight at angular distance ρ crosses the layer, and B_λ is the Planck function.

This model illustrates how to obtain a direct estimation of the temperatures of the star and of the molecular layer from interferometric data. Interestingly, this type of model allows an exploitation of multi-wavelength observations that takes into account the chromaticity of the astronomical object.
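A sketch of a star-plus-layer brightness profile in the spirit of (20)–(21); the numerical values (temperatures, diameters, opacity) are illustrative and are not those fitted in [32]:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck(wavelength, T):
    """Planck function B_lambda(T), in W m^-3 sr^-1."""
    a = 2 * H * C ** 2 / wavelength ** 5
    return a / math.expm1(H * C / (wavelength * KB * T))

def layer_model(rho, wavelength, T_star, T_layer, theta_star, theta_layer, tau):
    """Radial brightness I(rho) of a photosphere of diameter theta_star seen
    through a thin molecular shell of diameter theta_layer, temperature
    T_layer and optical depth tau; mu is the cosine of the angle at which
    the line of sight crosses the shell."""
    if rho > theta_layer / 2:
        return 0.0
    mu = math.sqrt(1 - (2 * rho / theta_layer) ** 2)
    if rho <= theta_star / 2:
        # On the stellar disk: attenuated photosphere + layer emission.
        return (planck(wavelength, T_star) * math.exp(-tau / mu)
                + planck(wavelength, T_layer) * (1 - math.exp(-tau / mu)))
    # Off the disk: the line of sight crosses the shell twice.
    return planck(wavelength, T_layer) * (1 - math.exp(-2 * tau / mu))

# K-band profile of a hypothetical star (angles in mas):
lam = 2.2e-6
center = layer_model(0.0, lam, 3600.0, 2000.0, 1.0, 2.0, 0.5)
ring = layer_model(0.9, lam, 3600.0, 2000.0, 1.0, 2.0, 0.5)
```

Fitting T⋆, T_l, θ⋆, θ_l and τ(λ) to visibilities of such a profile is what yields the direct temperature estimates mentioned in the text.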
B. Regularized Reconstruction

1) Introduction: In this framework, the sought object distribution x(ξ) is represented by its projection onto a basis of functions, often defined as a shifted separable kernel basis

x(ξ) = Σ_k x_k K(ξ − ξ_k)    (22)

where the dimensions and sampling steps of the grid {ξ_k} are chosen so as to span the object support S and to satisfy the Shannon–Nyquist condition with respect to the experimental frequency coverage. Kernels K are often box functions or sinc functions, sometimes wavelets or prolate spheroidal functions [33], [34]. The estimation aims at finding the coefficients x = {x_k} so as to fit the data.

This approach is sometimes loosely called a nonparametric approach because the parametrization (22) is here not to put further constraints on the solution, but only to allow its numerical computation. To tackle the under-determination, the data likelihood (13) is combined with a regularization term in a criterion of the form

J(x) = J^data(x) + μ J^prior(x)    (23)

The regularization term J^prior enforces the desired properties of the object (smoothness, positivity, compactness, etc.). The regularization parameter μ allows one to tune the regularization strength or, equivalently, to select a data term level set. Of course, the choice of μ depends on the noise level. The compound criterion (23) can be derived within a Bayesian paradigm: the data model (12) is translated into a data likelihood (13) which is combined with a prior distribution on the object by Bayes' rule to form the a posteriori distribution. Maximization of the posterior distribution is then equivalent to minimization of its negative logarithm, i.e., of a regularized criterion such as (23).

Most regularization terms penalize the discrepancy between the current solution and some a priori object x^prior, be it simply the null object. In the OII context of very sparse and ambiguous
datasets, the use of a meaningful prior object can be an efficient way to orient the reconstruction and to improve the results. In IBC'06 for instance, an a priori object was finally provided to the participants: see [10, Fig. 3]. It can be obtained from previous observations of the source with other instruments, or derived from the fit of a parametric model to the interferometric data at hand; see Section V-C below.

In the following, we briefly discuss the choice of the regularization terms and introduce an original regularization criterion that can be used on compact objects to enforce a "soft support" constraint.
2) Quadratic Regularization: Quadratic regularization has been applied to Fourier synthesis and OII by A. Lannes et al. [34]–[36]. For relatively smooth objects, one can use a correlated quadratic criterion expressed in the Fourier domain, with a parametric model S(ν) of the object's power spectrum. Such a model was proposed for deconvolution of AO-corrected images in [37]:

J_prior(x) = Σ_ν |x̃(ν) − x̃_prior(ν)|² / S(ν),  with  S(ν) = k / [1 + (ν/ν₀)^p],   (24)

where x̃ denotes the discrete Fourier transform of the image. This model, which relies on a prior object and three "hyper-parameters" (ν₀, p, k), has been used in various image reconstruction problems, including OII [9], [38]. Parameter ν₀ is a cutoff frequency, typically the inverse of the diameter of the object's support, which avoids divergence at the origin; p characterizes the decrease rate of the object's energy in the Fourier domain; and k plays the role of the inverse of hyperparameter μ of (23) and can replace this parameter. As already mentioned, an advantage of quadratic criteria is that it is possible to estimate the hyper-parameters, by maximum likelihood for example [39].
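To make the roles of the hyper-parameters concrete, here is a minimal numerical sketch of a criterion of the type of (24), assuming the simple isotropic power-spectrum model described above (the parameter names k, nu0 and p are ours):

```python
import numpy as np

def fourier_quadratic_penalty(x, x_prior, k=1.0, nu0=0.1, p=3.0):
    """Correlated quadratic regularization in the Fourier domain:
    sum over nu of |X(nu) - X_prior(nu)|^2 / S(nu), with an isotropic
    power-spectrum model S(nu) = k / (1 + (nu/nu0)^p)."""
    X = np.fft.fft2(x)
    Xp = np.fft.fft2(x_prior)
    fy = np.fft.fftfreq(x.shape[0])[:, None]
    fx = np.fft.fftfreq(x.shape[1])[None, :]
    nu = np.hypot(fy, fx)                # radial spatial frequency (cycles/pixel)
    S = k / (1.0 + (nu / nu0) ** p)      # finite at nu = 0: no divergence
    return float(np.sum(np.abs(X - Xp) ** 2 / S))
```

Because S(ν) decreases with frequency, discrepancies at high spatial frequencies are penalized more than smooth, low-frequency ones, which is the smoothing behavior described above.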
A simple and efficient quadratic regularization is a separable quadratic distance to the prior object x̄. In Appendix B, we show that the general expression of such regularization terms under the OII-specific constraints of unit sum and positivity [see (40)] is

J_prior(x) = Σ_k (x_k − x̄_k)² / x̄_k,   (25)

where the a priori object x̄ is chosen to be strictly positive and normalized to unity. In the absence of a meaningful object to be used for x̄, the width of the observed source is usually more or less known, so we have found that a reasonable a priori object is an isotropic one such as the Lorentzian model. Such a prior object can then be seen as enforcing a loose support constraint.
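As an illustration, the soft support penalty of (25) with a Lorentzian prior object can be sketched in a few lines (a minimal implementation under our reading of (25); function names and the FWHM handling are ours):

```python
import numpy as np

def lorentzian_prior(n, fwhm_pix):
    """Isotropic 2-D Lorentzian profile: strictly positive, normalized to unit sum."""
    y, x = np.indices((n, n)) - (n - 1) / 2.0
    prof = 1.0 / (1.0 + (x**2 + y**2) / (fwhm_pix / 2.0) ** 2)
    return prof / prof.sum()

def soft_support_penalty(x, x_prior):
    """Separable quadratic distance sum_k (x_k - xbar_k)^2 / xbar_k: cheap where the
    prior is bright, expensive in its faint wings, hence a 'soft support' that favors
    compact objects."""
    return float(np.sum((x - x_prior) ** 2 / x_prior))
```

Flux placed far from the prior's core is penalized much more strongly than the same flux near its center, which is exactly the loose support behavior described in the text.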
3) Edge-Preserving Regularization: For extended objects with sharp edges such as planets, a quadratic criterion tends to over-smooth the edges and introduce spurious oscillations, or ringing, in their neighborhood. A solution is thus to use an edge-preserving criterion such as the so-called quadratic-linear, or L2−L1, criteria, which are quadratic for the weak gradients of the object and linear for the stronger ones. The quadratic (or L2) part ensures good noise smoothing and the linear (or L1) part cancels edge penalization. Here we present an isotropic version [40] of the criterion proposed by Rey [41] in the context of robust estimation and used by Brette and Idier in image restoration [42]:

J_prior(x) = Σ_k φ(∇x_k),   (26)
∇x_k = √[(∇₁x)²_k + (∇₂x)²_k],   (27)
φ(t) = s δ² [ t/δ − ln(1 + t/δ) ],   (28)

with ∇₁x and ∇₂x the gradient approximations by finite differences in the two spatial directions. The two parameters to be adjusted are a scale parameter s and a threshold parameter δ. Parameter s plays the same role as the conventional regularization parameter μ of (23) and can replace it, with μ = 1; indeed, for small values of ∇x_k/δ, each term of (26) reads s(∇x_k)²/2.
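The quadratic-then-linear behavior of the potential can be checked numerically (this sketch assumes our reconstruction of (26)–(28); s and delta play the roles described above):

```python
import numpy as np

def phi(t, s=1.0, delta=1.0):
    """L2-L1 potential: ~ s*t**2/2 for t << delta, ~ linear in t for t >> delta."""
    u = t / delta
    return s * delta**2 * (u - np.log1p(u))

def edge_preserving_penalty(img, s=1.0, delta=1.0):
    """Sum of phi over the isotropic finite-difference gradient modulus of the image."""
    gy = np.diff(img, axis=0, append=img[-1:, :])   # vertical finite differences
    gx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal finite differences
    return float(np.sum(phi(np.hypot(gy, gx), s, delta)))
```

The quadratic regime smooths small, noise-like gradients while the linear regime leaves sharp edges almost unpenalized.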
4) Spike-Preserving or Entropic Regularization: For objects composed of bright points on a fairly extended and smooth background, such as are often found in astronomy, a useful regularization tool is entropy. Here, we adopt the pragmatic point of view of Narayan and Nityananda [43] and consider that entropy is essentially a separable criterion

J_prior(x) = Σ_k φ(x_k, x̄_k),   (29)

where each pixel x_k is drawn toward a prior value x̄_k according to a nonquadratic potential φ. Classical examples of "entropic potential" are the Shannon entropy φ(x, x̄) = x ln(x/x̄), also termed neg-entropy, and the Burg entropy φ(x, x̄) = −ln(x/x̄), but many other nonquadratic potentials can be used, as shown in [43]. The major interest of the nonlinearity of entropic potentials is that they help to interpolate holes in the frequency coverage. Side effects are emphasized spikes and smoothed low-level structures. As it results in ripple suppression in the flat background and enhanced spatial resolution near sharp structures, this behavior may be considered beneficial in the context of interferometric imaging, though it also introduces substantial biases. Note that the interferometric imaging method BSMEM [44], winner of the IBCs of 2004 and 2006 [9], [10], and the VLBMEM method [9], are based upon entropic regularization with such a potential.
Here we propose an entropic-like criterion which re-employs the potential φ of (28) in a "white" L2−L1 prior. Using the same tools as in Appendix B, it can be shown that the general form of a white L2−L1 prior under the OI-specific constraints of unit sum and positivity is

(30)

where φ is a function such as (28) and φ′ denotes its first derivative.

An interesting refinement of such priors is to model the observed object with the combination of a correlated L2−L1 regularization for the extended component of the object and a white L2−L1 regularization for the bright spots [45].
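For illustration, the spike-preserving character of the entropic potentials named above, compared with a quadratic one, can be seen in a few lines (normalizations are ours and purely illustrative):

```python
import numpy as np

def shannon(x, xbar=1.0):
    """Shannon neg-entropy potential x*ln(x/xbar): sub-quadratic for bright pixels."""
    return x * np.log(x / xbar)

def burg(x, xbar=1.0):
    """Burg potential -ln(x/xbar): infinite barrier as x -> 0 (enforces positivity)."""
    return -np.log(x / xbar)

def quad(x, xbar=1.0):
    """Quadratic reference potential."""
    return 0.5 * (x - xbar) ** 2

# a bright spike (x = 100 * xbar) costs far less under the entropic potentials
# than under the quadratic one: this is the "spike-preserving" behavior
for f in (shannon, burg):
    assert f(100.0) < quad(100.0)
```

Both entropic potentials also blow up (Burg) or bend sharply (Shannon) near zero, acting as an implicit positivity barrier, whereas the quadratic potential treats positive and negative deviations symmetrically.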
LE BESNERAIS et al.: ADVANCED IMAGING METHODS FOR LONG-BASELINE OI 773
IV. ALGORITHMS FOR REGULARIZED IMAGING
The regularized criterion (23) is not strictly convex and actu-
ally often multi-modal, because of missing phase information.
Therefore, the solution to the OII problem cannot be simply de-
fined as the minimizer of (23). Actually, it should be defined as
the point where the minimization algorithm stops. Most OII algorithms (BSMEM, MIRA, WISARD) are iterative local descent algorithms, with the exception of MACIM [10], which uses simulated annealing to search for the object's support. In this section,
we present two iterative algorithms designed for OII: MIRA and
WISARD. Both are at the state of the art: WISARD ranked second
in IBC’04, while MIRA ranked second in IBC’06 and very re-
cently won IBC’08. They are able, like MACIM, to handle var-
ious prior terms—while BSMEM is dedicated to entropic regu-
larization. They differ, however, in their treatment of the phase
problem. There are essentially two approaches to this problem: explicit algorithms use a set of phase variables and proceed with a joint minimization over these variables and the object, while implicit approaches search for a minimum of (23) with respect to the object only, often with some heuristics in order to avoid getting trapped
in local minima. Explicit algorithms include VLBMEM [9],
WIPE [46], [47] and WISARD. BSMEM, MACIM and MIRA are
implicit algorithms. In this respect, MIRA and WISARD are rep-
resentative of the two main streams of current OII algorithms.
A. Direct Minimization (MIRA)
The MIRA [48], [49] method (MIRA stands for Multi-aperture Image Reconstruction Algorithm) seeks the image by directly minimizing criterion (23). MIRA accounts for power spectrum and closure phase data via the penalties defined in (13) and (14). MIRA implicitly accounts for missing phase information, as it only searches for the object. Since MIRA does not attempt to explicitly solve degeneracies, it can be used to restore an image (of course with at least a 180° orientation ambiguity) from the power spectrum only, i.e., without any phase information; see examples in [49], [50].

To minimize the criterion, the optimization engine is VMLMB [51], a limited-memory variable metric algorithm which accounts for parameter bounds. This last feature is used to enforce positivity of the solution. Only the value of the cost function and its gradient are needed by VMLMB. Normalization of the solution is obtained by a change of variables: the image brightness distribution becomes x_k = y_k / Σ_j y_j, where the y_k are the internal variables seen by the optimizer, with the constraints y_k ≥ 0 for all k. Thus, x is both normalized and positive. The gradient is modified as follows:

∂f/∂y_k = (1/Σ_j y_j) [ ∂f/∂x_k − Σ_j x_j ∂f/∂x_j ].   (31)
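The change of variables and the gradient correction of (31) can be verified numerically against finite differences (a generic toy cost stands in for the actual MIRA data term):

```python
import numpy as np

def normalize(y):
    """Change of variables: internal variables y >= 0 map to a unit-sum image x."""
    return y / y.sum()

def grad_wrt_y(grad_x, y):
    """Chain rule of (31) for x = y / sum(y):
    df/dy_k = (df/dx_k - sum_j x_j df/dx_j) / sum_j y_j."""
    x = normalize(y)
    return (grad_x - np.dot(x, grad_x)) / y.sum()

# verify against central finite differences on a smooth toy cost f = ||x - c||^2
rng = np.random.default_rng(0)
y = rng.uniform(0.5, 2.0, 5)
c = rng.uniform(size=5)
f = lambda y: float(np.sum((normalize(y) - c) ** 2))
g_analytic = grad_wrt_y(2.0 * (normalize(y) - c), y)
eps = 1e-6
g_fd = np.array([(f(y + eps * np.eye(5)[k]) - f(y - eps * np.eye(5)[k])) / (2 * eps)
                 for k in range(5)])
assert np.allclose(g_analytic, g_fd, atol=1e-6)
```

Note that the correction term subtracts the mean (flux-weighted) gradient, which is what makes the cost invariant to a global rescaling of y.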
To avoid getting trapped in a local minimum of the data penalty, which is multi-modal, MIRA starts the minimization with a purposely too-high regularization level. After convergence, the reconstruction is restarted from the previous solution with a smaller regularization level (e.g., the value of μ is divided by two). These iterations are repeated until the chosen value of μ is reached. This procedure mimics the more clever strategy proposed by Skilling & Bryan [52], which is implemented in MemSys, the optimization engine of BSMEM.
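This continuation strategy can be sketched as follows; scipy's bound-constrained L-BFGS-B stands in for VMLMB, and a toy quadratic data term replaces (13)–(14) (all names and values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def reconstruct(j_data, g_data, j_prior, g_prior, x0, mu_final, mu_start=64.0):
    """Start with a purposely too-high regularization level mu, minimize under
    positivity bounds, then restart from the previous solution with mu halved,
    until the chosen value mu_final is reached."""
    x, mu = x0, mu_start
    while True:
        res = minimize(lambda z: j_data(z) + mu * j_prior(z), x,
                       jac=lambda z: g_data(z) + mu * g_prior(z),
                       method="L-BFGS-B", bounds=[(0.0, None)] * x0.size)
        x = res.x
        if mu <= mu_final:
            return x
        mu = max(mu / 2.0, mu_final)

# toy problem: data pull toward a spiky target, prior favors smoothness
target = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
j_data = lambda x: 0.5 * np.sum((x - target) ** 2)
g_data = lambda x: x - target

def j_prior(x):
    return 0.5 * np.sum(np.diff(x) ** 2)

def g_prior(x):
    d = np.diff(x)
    g = np.zeros_like(x)
    g[:-1] -= d
    g[1:] += d
    return g

x_rec = reconstruct(j_data, g_data, j_prior, g_prior, np.full(5, 0.2), mu_final=0.01)
```

Each restart inherits the previous, heavily-smoothed solution, so the final, lightly-regularized run starts close to a sensible basin of attraction.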
B. A Self Calibration Approach (WISARD)
The self calibration approach developed in [22], [38], [53]
relies on an explicit modeling of the missing phase information
and allows one to obtain a convex intermediate image recon-
struction criterion. It is inspired by self-calibration algorithms
in radio-interferometry [54], but uses a more precise approximation of the observation model than first attempts such as [47]. This approach consists in jointly finding the object and an additional phase vector corresponding to the phase components in the kernel of the closure operator of (9). It starts from a generalized inverse solution to the phase closure equation (12): by applying the generalized inverse of the closure operator on the left to (12) and (9), the missing phase components are made explicit

(32)
It is thus tempting to define a pseudo-equation of visibility phase measurement by identifying the residual term of (32) with a noise affecting the visibility phase [55]:

(33)

Unfortunately, as the closure matrix is singular, this identification is not rigorously possible, and one is led to associate an ad hoc covariance matrix with this noise term, so as to approximately fit the statistical behavior of the closures. Recently, [22], [38] have discussed possible choices for this covariance.

Finding a suitable approximation for the covariance of the amplitude measurements (12), cf. [22] and [38], gives a "myopic" measurement model, i.e., one that depends on both unknowns (object and missing phases):

(34)
with Gaussian noise terms on the phase and on the modulus. Still, the resulting likelihood is not quadratic with respect to the object, because a Gaussian additive noise in phase and modulus is not equivalent to an additive complex Gaussian noise on the visibility. This is the problem of converting measurements from polar coordinates to Cartesian ones, which has long been known in the field of radar [56] and was identified only very recently in optical interferometry [57]. The myopic model of (34) is thus further approximated by a complex additive Gaussian model such as

(35)

The mean value and covariance matrix of the additive complex noise term can be chosen so that the corresponding data likelihood criterion is convex quadratic w.r.t. the complex
Fig. 2. Synthetic object from IBC'04. Left: original full-resolution true object. Right: true object convolved by the PSF of a 132-m perfect telescope.

while remaining close to the real nonconvex likelihood [22], [57]. Finally, using (33), one obtains

(36)

where the matrix involved is the discrete-time Fourier transform operator corresponding to the instantaneous frequency coverage at each time, and the product denotes component-wise multiplication of vectors. As clearly shown by (36), the resulting model is now linear in the object for fixed missing phases. This last step leads to a data fitting term that is quadratic in the real and imaginary parts of the residuals—see [22] and [38] for a complete expression. As discussed in Section III, this data term is then combined with a convex regularization term, so as to obtain a composite criterion

(37)
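The polar-to-Cartesian conversion problem invoked above can be illustrated by a short Monte-Carlo experiment: Gaussian noise on modulus and phase yields a complex error that is biased and strongly anisotropic, which is what motivates fitting an additive complex Gaussian model as in (35) (the noise levels below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
true_vis = 1.0 + 0.0j               # noiseless complex visibility (on the real axis)
sigma_rho, sigma_phi = 0.05, 0.5    # modulus and phase noise std (phase in radians)

rho = np.abs(true_vis) + sigma_rho * rng.standard_normal(200_000)
phi = np.angle(true_vis) + sigma_phi * rng.standard_normal(200_000)
err = rho * np.exp(1j * phi) - true_vis

bias = err.mean()                   # mean complex error: pulled toward the origin
radial = err.real.std()             # scatter along the visibility direction
tangential = err.imag.std()         # scatter orthogonal to it
```

The sample mean is clearly nonzero (phase noise shrinks the expected modulus by a factor exp(−σφ²/2)) and the tangential scatter greatly exceeds the radial one, so a sensible complex Gaussian approximation must adjust both its mean and its (anisotropic) covariance.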
Let us emphasize the interesting properties of this criterion. On one hand, it is convex in the object for fixed phases; on the other hand, the data term is separable over measurement instants, which allows handling the phase step by several parallel low-dimensional optimizations.
The WISARD algorithm makes use of these properties and minimizes the criterion alternately in the object for the current phases and in the phases for the current object. The structure of WISARD is the following: after a first step that casts the true data model into the myopic model (34), a second step "convexifies" the obtained model w.r.t. the object, to obtain the model of (36). After the selection of the guess and the prior, WISARD performs the alternating minimization.

For the moment, this approach is less versatile than a direct all-purpose minimization method such as MIRA: WISARD cannot cope with missing phase closure information or take into account bispectrum moduli. Indeed, once the pseudo-likelihood associated to model (36) is derived, data that do not fit this recasting stage are not taken into account. Extending WISARD to make it more versatile in the above-mentioned sense is left for future work.
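The alternating scheme can be sketched on a toy myopic problem where each measurement instant carries an unknown global phase; the phase step has a closed-form solution per instant and the object step is a linear least-squares problem, so the cost is guaranteed not to increase (a simplified stand-in for (36), with all dimensions illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, T = 8, 6, 5                 # object size, frequencies per instant, instants
F = [rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)) for _ in range(T)]
x_true = rng.uniform(0.1, 1.0, n)
alpha_true = rng.uniform(-np.pi, np.pi, T)
data = [np.exp(1j * alpha_true[t]) * (F[t] @ x_true) for t in range(T)]

def cost(x, alpha):
    return sum(float(np.sum(np.abs(np.exp(1j * alpha[t]) * (F[t] @ x) - data[t]) ** 2))
               for t in range(T))

x = np.ones(n)
costs = []
for _ in range(100):
    # phase step: separable over instants, each phase solved in closed form
    alpha = [float(np.angle(np.vdot(F[t] @ x, data[t]))) for t in range(T)]
    # object step: linear least squares in x for the current phases (convex)
    A = np.vstack(F)
    b = np.concatenate([np.exp(-1j * alpha[t]) * data[t] for t in range(T)])
    x = np.linalg.solve((A.conj().T @ A).real, (A.conj().T @ b).real)
    costs.append(cost(x, alpha))
```

Alternating exact minimizations never increase the cost, which is the property WISARD exploits; on this noiseless toy problem the residual typically drops close to numerical zero, recovering the object up to the usual degeneracies.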
C. Results on Synthetic Data
This section presents results of nonparametric reconstruction
methods on the synthetic interferometric data that were produced by C. Hummel for the 2004 International Imaging Beauty Contest (IBC'04) organized by P. Lawson for the IAU [9].

Fig. 3. Frequency coverage from the IBC'04.

These data simulate the observation of the synthetic object shown in Fig. 2 with the NPOI 6-telescope interferometer. The corresponding frequency coverage, shown in Fig. 3, contains 195 squared visibility moduli and 130 closure phases. The resolution of the interferometric configuration, as given by the ratio of the minimum wavelength over the maximum baseline, is 0.9 mas.
In Fig. 2 right, we present the image that a 132-m perfect
telescope would provide of the object. The cutoff frequency of
such an instrument would be twice the maximum value of the
frequency coverage used to produce the synthetic dataset (see
Fig. 3). It is therefore relevant to compare the reconstructions
with this image.
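The resolution criterion used here (ratio of wavelength to baseline) converts to milliarcseconds as follows (a one-line helper; the example numbers in the check are illustrative H-band/IOTA-like values, not the contest's):

```python
import numpy as np

def resolution_mas(wavelength_m, baseline_m):
    """Angular resolution lambda / B, converted from radians to milliarcseconds."""
    return np.degrees(wavelength_m / baseline_m) * 3.6e6
```

For instance, a 1.65 μm wavelength over a 38 m baseline gives roughly 9 mas.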
Various results of MIRA with quadratic regularizations are presented in Fig. 4. The top image is essentially a "dirty reconstruction": it uses a separable quadratic penalty with a very low value of the regularization parameter μ. The middle image is obtained in the same setting, but with a positivity constraint. The improvement is dramatic, as both the object support and its low resolution features are recovered. An interpretation is that the positivity plays the role of a floating support constraint, which
Fig. 4. Results on IBC'04 with MIRA. Top: "dirty" reconstruction (see text). Middle: positivity constraint. Bottom: soft support quadratic regularization (25), with a prior object Lorentzian of 5 mas FWHM and 0.1 mas pixel size. All reconstructions have 256 × 256 pixels.
favors smooth spectra and interpolates the missing spatial frequencies. The bottom image uses the soft support quadratic regularization of (25), with a Lorentzian of 5 mas FWHM as a prior object and a positivity constraint. This regularization, although quadratic, leads to a very good reconstruction, with a central peak clearly separated from the asymmetric shell.
Fig. 5 presents two WISARD reconstructions. The left one is obtained with the same soft support quadratic regularization as the MIRA reconstruction of Fig. 4, bottom. Although MIRA and WISARD are based on different criteria and follow different paths during the optimization, the reconstructions are visually very close. With such a "rich" 6-telescope dataset, the missing phase information (33%) is reasonable and the differences between reconstructions, when they are present, originate mainly from the choice of different regularization terms. As an example, a reconstruction based on the white spike-preserving prior of (30) with a constant prior object is shown in Fig. 5, right. This last reconstruction presents finer details than the quadratic ones, possibly even finer than the smoothed object of Fig. 2, at the price of some artifacts on the asymmetric shell. However, the validity of these details is difficult to assess.

As a conclusion, the proposed soft support quadratic regularization yields images of quality comparable to those obtained with spike-preserving priors. Contrary to what is generally believed (see for instance Narayan and Nityananda [43]), special-purpose quadratic separable regularizations are perfectly suitable for image reconstruction by Fourier synthesis, provided the object is compact and positivity constraints are active.
V. PROCESSING REAL DATA
A. The Infrared Optical Telescope Array (IOTA)
The IOTA interferometer, operated from 1993 to 2006 (cf.,
tdc-www.harvard.edu/iota/) on Mt Hopkins (Arizona, USA),
had variable baseline lengths and thus gave access to a broad
frequency coverage. It operated with three 45 cm siderostats
that could be located at different stations on each arm of an
L-shaped array (the NE arm is 35 m long, the SE arm 15 m).
The maximum nonprojected baseline length was 38 m, and the
minimum one 5 m. It used fiber optics for spatial filtering, and
an integrated optics beam combiner called IONIC [58]. It was
decommissioned in July 2006.
B. The Dataset
The dataset presented here corresponds to observations of the star χ Cygni. Of the class of the Mira variables, χ Cygni is an evolved star whose extended atmosphere is puffed up by the strong radiation pressure induced by fusion of metals (here, metals means chemical elements heavier than helium) in its core. This late stage of evolution is appropriate for interferometric imaging since the large stellar radius can be resolved by optical interferometers. Moreover, these stars are usually bright in the infrared, allowing robust fringe tracking.

χ Cygni was observed during a six-night run in May 2006. Night-time was used for observing and daytime was used to move and configure the telescopes. The log of the interferometer configurations is presented in Table I.

The reduced dataset is plotted in the two panels of Fig. 6. χ Cygni was observed over the whole H band, and fringes were dispersed in order to obtain spectral information. In this paper, we shall not address the chromaticity of the object. Therefore, we use the diverse wavelengths only as a way to increase the Fourier coverage. The frequency plane coverage was previously presented in Fig. 1. The visibilities are
Fig. 5. Results on IBC'04 with WISARD. Left: 120 × 120 pixels reconstruction with 0.125 mas pixel size using a soft support quadratic regularization (25), with a prior object Lorentzian of 2.5 mas FWHM. Right: 60 × 60 pixels reconstruction with 0.25 mas pixel size based on a white L2−L1 regularization (30) with a constant prior object and parameters ? ? ?, ? ? ?????.

TABLE I
χ CYGNI OBSERVING LOG
Configuration refers to the location in meters of telescopes A, B, C on the NE, SE, and NE arms, respectively. Position "0" corresponds to the arms' intersection.
presented in the upper panel of Fig. 6 as a function of the base-
line length in wavelength units. The closure phases are plotted in the bottom panel. Due to the difficulty of representing these phases as a function of a physical parameter, we simply present them as a function of the observation data-point number. The vertical lines indicate a change of interferometer configuration.
A closure phase equal to zero or π corresponds to a centro-symmetric object. Thus, a preliminary inspection of the closure phases can show the presence of asymmetries. The higher the frequency, the more apparent the asymmetry is. This makes sense to an astronomer because photospheric inhomogeneities are likely to be present at a smaller scale than the size of the photosphere. In the case of χ Cygni, the photosphere's size estimate is 21.3 mas, to be compared with the resolution of the interferometer, slightly less than 5 mas.
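The stated property is easy to check with a small DFT experiment: a centro-symmetric distribution has Fourier phases of 0 or π, hence closure phases of 0 or π, while an off-center spot makes the closures depart from these values (a 1-D construction of ours):

```python
import numpy as np

def closure_phase(img, f1, f2):
    """Closure phase over the frequency triangle (f1, f2, -(f1+f2)):
    arg[ X(f1) X(f2) X(-(f1+f2)) ], with X the DFT of the distribution."""
    X = np.fft.fft(img)
    return np.angle(X[f1] * X[f2] * X[-(f1 + f2)])

n = 64
j = np.arange(n)
sym = np.exp(-0.5 * ((j - 32) / 4.0) ** 2)   # centro-symmetric Gaussian
asym = sym.copy()
asym[38] += 0.5                              # off-center "spot" breaks the symmetry

pairs = [(2, 3), (3, 5), (2, 9)]
dep = lambda cp: min(abs(cp), abs(abs(cp) - np.pi))  # distance to {0, pi}
dep_sym = [dep(closure_phase(sym, f1, f2)) for f1, f2 in pairs]
dep_asym = [dep(closure_phase(asym, f1, f2)) for f1, f2 in pairs]
```

The symmetric object yields departures at the level of numerical noise only, whereas the spotted object shows clear nonzero departures, and the effect grows with frequency as described in the text.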
C. Image Reconstruction
In Fig. 7, we present three reconstructed images, obtained
using different methods and priors.
The first image corresponds to a parametric inversion of the
data using all the available spectral channels merged together.
The justification for such a polychromatic processing of the data is ongoing work; however, first results confirm a variation of the
angular diameter of less than 1 mas in the H band (S. Lacour,
private communication, 2008).
As stated, the quality of the reconstruction will depend
heavily on the correctness of the model of the object. For-
tunately, Mira variables are not completely unknown, and
Fig. 6. IOTA dataset on χ Cyg. Top panel: squared visibilities as a function of the baseline length. Bottom panel: closure phases as a function of the observation number. Labels of the type Axx-Bxx-Cxx correspond to the telescope configurations (see Table I).
previous astronomical observations tell us that the star is ex-
pected to possess: i) a large limb-darkened photosphere; ii) important asymmetries in the form of photospheric "hot spots"; and iii) a close, warm molecular layer surrounding the photosphere at around one stellar radius above it.
Fig. 7. Reconstructed images of the star χ Cygni; right: contour plots with levels 10%, 20%, …, 90% of the maximum. From top to bottom: parametric reconstruction, WISARD with a white L2−L1 prior, MIRA with a separable quadratic penalization towards a prior parametric solution. Details on the different methods are given in Section V-C. Note the colossal size of the star: at the distance of χ Cygni (170 pc; [59]), 5 mas correspond roughly to the Sun-Earth distance (1 astronomical unit).
This simplistic theoretical model (i.e., a limb darkened disk,
a spot and a spherical thin layer) is converted to a geometrical
parametric model, which is adjusted to the data through the minimization of the criterion of (13). The image presented in the left panel of Fig. 7 corresponds to the geometrical model with best-fit parameters.
These parameters give direct information on the structure of the object, and error bars can be estimated: the star diameter is 21.49 ± 0.11 mas and the hot spot contrast is 1.70 ± 0.04%. Note that the requirements of a parametric reconstruction, in terms of frequency coverage, are much less stringent than those of a nonparametric one. Thus, parametric inversion can also be used with each spectral channel separately, to determine the spectral energy distribution of the surrounding atmospheric layer.

The second image was produced using the WISARD software described in Section IV-B with a white L2−L1 prior—see (26). The last image was reconstructed using MIRA, see Section IV-A, and, more importantly, using a prior solution in the white quadratic setting of (25). The prior solution is a limb-darkened disk whose parameters are determined by model fitting on the visibilities.
D. Discussion
Fig. 7 shows that, for the sparse data at hand, the more stringent the prior, the more convincing the reconstruction looks to an astronomer.

More precisely, the white L2−L1 prior used by WISARD does not allow one to distinguish more than a resolved photosphere and the fact that some asymmetry is present. The form of the reconstructed photosphere and its surroundings can be questioned when compared to what is expected from the theory. Besides, in the presented reconstructions, MIRA was used with a much more informative prior and is in good agreement with the parametric reconstruction. This image is nevertheless interesting because the reconstruction is notably different from a simple disk, and adds an asymmetry—a "hotspot"—on the surface. The presence of an asymmetry could be foreseen by looking at the raw closure phases (bottom panel of Fig. 6). The fact that this asymmetry appears similarly—in terms of flux and position—on the parametric and on the nonparametric image reconstructions is a convincing argument to validate both images.

Note that, on the MIRA reconstruction, an emission surrounding the photosphere is present, but its reality is difficult to assess from the reconstructed image alone. Hence, it should be pointed out that neither of the nonparametric reconstructions exhibits the molecular layer which is revealed by the parametric reconstruction.
VI. CONCLUSIONS
In recent years, long baseline optical interferometers with
better capabilities have become available. Routine observations
with three or more telescope interferometers have become a reality. Although quite sparse with respect to radio arrays, the spatial frequency coverage allows one to study more complex objects and to reconstruct images. In this paper, we have described, besides the parametric reconstruction approach, two nonparametric image reconstruction methods, MIRA and WISARD. MIRA
is based on the direct optimization of a Bayesian criterion while
WISARD adopts a self-calibration approach inspired from radio-
astronomy. As such, these two methods are representative of
the two families of state-of-the-art nonparametric reconstruc-
tion methods [9], [10].
On rich-enough data, which are currently available only from
simulations, both methods demonstrate a valuable and compa-
rable capability for imaging complex objects. On such data,
the differences between reconstructions originate mostly from
the choice of different regularization terms. We have reviewed
common regularization criteria and proposed an original regu-
larization criterion that can be used on compact objects to en-
force a “soft support” constraint. This criterion, although it is
quadratic, yields images of quality comparable to that obtained
with spike-preserving priors on the IBC’04 dataset.
We have demonstrated the operational imaging capabilities of these methods on an IOTA dataset of χ Cygni. However, for these data, which may be considered representative of today's optical long-baseline interferometers, we have shown that the parametric approach remains a choice of reference for OII.
The experience gathered while trying to extract the most in-
formation from real-world data, both in the work described here
and elsewhere [31], suggests that the optimal processing of measurements from present-day interferometers should make use of both approaches in an alternate fashion, as described below.
With a sparse frequency coverage, a parametric reconstruction
is useful to obtain ab initio a first estimate for the observed ob-
ject. A parametric reconstruction will not reveal any unguessed
feature, but it can be used to guide nonparametric reconstruc-
tions as an initial guess or as a prior object for instance. Then
the reconstructed images are very useful to understand the structure of a complex object, since they are most often the very first insight one gets about the source at this angular resolution.
The fidelity of nonparametric reconstructions remains limited in a photometric sense, and such reconstructions can therefore seldom be used to infer astrophysical parameters. Ultimately, parametric models remain the choice of reference for estimating astrophysical parameters related to the very physics of the objects of interest.
It is therefore very likely that even in the yet to come imaging
era of optical interferometry, i.e., when much larger optical
interferometric arrays become operational, the parametric
approach will remain a useful tool for astrophysical modeling,
even though it will no longer be necessary to initialize the
imaging process.
APPENDIX A
THE CLOSURE AND BASELINE OPERATORS

Let N be the number of telescopes of a complete interferometric array. The baseline operator B and the closure operator C are then defined by (38). It is easy to see that C B = 0: the closures are insensitive to any telescope-dependent phase. The generalized inverse C† of C, defined by C C† C = C, is such that C† C is the orthogonal projection onto the orthogonal complement of the kernel of C.
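For a concrete illustration, the baseline operator (telescope phases to baseline phase differences) and the closure operator (baseline phases to triangle sums) can be built explicitly for a small array; the closures are then blind to any telescope-dependent phase, and the number of independent closures is (N−1)(N−2)/2 (the index conventions below are ours):

```python
import numpy as np
from itertools import combinations

def baseline_operator(n_tel):
    """Rows map telescope phases to baseline phase differences phi_i - phi_j."""
    pairs = list(combinations(range(n_tel), 2))
    B = np.zeros((len(pairs), n_tel))
    for r, (i, j) in enumerate(pairs):
        B[r, i], B[r, j] = 1.0, -1.0
    return B, pairs

def closure_operator(n_tel):
    """Rows sum baseline phases over telescope triangles: phi_ij + phi_jk - phi_ik."""
    _, pairs = baseline_operator(n_tel)
    idx = {p: r for r, p in enumerate(pairs)}
    triangles = list(combinations(range(n_tel), 3))
    C = np.zeros((len(triangles), len(pairs)))
    for r, (i, j, k) in enumerate(triangles):
        C[r, idx[(i, j)]] += 1.0
        C[r, idx[(j, k)]] += 1.0
        C[r, idx[(i, k)]] -= 1.0
    return C

B, _ = baseline_operator(6)
C = closure_operator(6)
```

For the 6-telescope case of IBC'04, C B = 0 holds exactly and only 10 of the 20 triangle closures are linearly independent, which quantifies the missing phase information mentioned in Section IV-C.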
APPENDIX B
QUADRATIC REGULARIZATION TOWARDS
A PRIOR OBJECT IN OI
A general expression for a quadratic separable regularization is given by

J(x) = Σ_k (x_k − m_k)² / s_k,   (39)

where s_k > 0 for all k, otherwise the criterion is degenerate. The default solution x̄ is obtained by minimizing this cost function in the absence of data and subject to the constraints (normalization and nonnegativity)

Σ_k x_k = 1,   x_k ≥ 0 ∀k,   (40)

where Σ_k means the sum over all pixels k. Assuming for the moment that all inequality constraints are inactive at the solution, the Lagrangian for the constrained problem can be written as

L(x, λ) = Σ_k (x_k − m_k)²/s_k + λ (Σ_k x_k − 1).   (41)

Minimizing L with respect to x only readily yields x_k = m_k − λ s_k/2. The optimal Lagrange multiplier λ is identified by requiring the normalization of x and, finally, the default solution is

x̄_k = m_k + s_k (1 − Σ_j m_j) / Σ_j s_j,   (42)

which is normalized and strictly positive for suitable choices of m and s. This additionally validates our hypothesis that the inequality constraints were all inactive at the solution. Combining (39) and (42) yields the expression of the quadratic, separable, loose support regularization term of (25).
ACKNOWLEDGMENT
The authors would like to thank all the people who contributed to the existence and success of the IOTA interferometer.
They also thank the anonymous reviewers for their numerous
suggestions, which resulted in a great improvement of the
paper’s quality.
REFERENCES
[1] A. Quirrenbach, “Optical interferometry,” Annu. Rev. Astron. and As-
trophys., 2001.
[2] J. D. Monnier, “Optical interferometry in astronomy,” Rep. Progr.
Phys., vol. 66, pp. 789–857, May 2003.
[3] J. D. Monnier et al., “Imaging the surface of Altair,” Science, vol. 317,
p. 342, Jul. 2007.
[4] A. Domiciano de Souza et al., "The spinning-top Be star Achernar from VLTI-VINCI," Astron. Astrophys., vol. 407, no. 3, pp. L47–L50, 2003.
[5] W. Jaffe et al., “The central dusty torus in the active nucleus of NGC
1068,” Nature, vol. 429, pp. 47–49, 2004.
[6] A. Poncelet, G. Perrin, and H. Sol, “A new analysis of the nucleus of
NGC 1068 with MIDI observations,” Astron. Astrophys., vol. 450, pp.
483–494, 2006.
[7] K. R. W. Tristram et al., “Resolving the complex structure of the dust
torus in the active nucleus of the circinus galaxy,” Astron. Astrophys.,
vol. 474, pp. 837–850, 2007.
[8] P. R. Lawson, "Notes on the history of stellar interferometry," in Principles of Long Baseline Stellar Interferometry, Course Notes from 1999 Michelson Summer School, P. R. Lawson, Ed. Pasadena, CA: NASA-JPL, 2000, pp. 325–32, no. 00-009.
[9] P. R. Lawson et al., "An interferometric imaging beauty contest," in New Frontiers in Stellar Interferometry, Proc. SPIE Conf., W. A. Traub, Ed. Bellingham, WA: SPIE, 2004, vol. 5491, pp. 886–899.
[10] P. R. Lawson et al., "The 2006 interferometry imaging beauty contest," in Advances in Stellar Interferometry, J. D. Monnier, M. Schöller, and W. C. Danchi, Eds. Bellingham, WA: SPIE, 2006, vol. 6268, p. 59.
[11] J. W. Goodman, Statistical Optics. New York: Wiley, 1985.
[12] M. Born and E. Wolf, Principles of Optics, 6th ed. New York: Pergamon, 1993.
[13] J.-M. Mariotti, "Introduction to Fourier optics and coherence," in Diffraction-Limited Imaging With Very Large Telescopes, ser. NATO ASI Series C, D. M. Alloin and J.-M. Mariotti, Eds. Norwell, MA: Kluwer, 1989, vol. 274, pp. 3–31.
[14] F. Roddier, "The effects of atmospherical turbulence in optical astronomy," in Progress in Optics, E. Wolf, Ed. Amsterdam, The Netherlands: North Holland, 1981, vol. XIX, pp. 281–376.
[15] F. Roddier, J. M. Gilli, and G. Lund, “On the origin of speckle boiling
and its effects in stellar speckle interferometry,” J. Opt., vol. 13, no. 5,
pp. 263–271, 1982.
[16] J.-M. Conan, G. Rousset, and P.-Y. Madec, "Wave-front temporal spectra in high-resolution imaging through turbulence," J. Opt. Soc. Amer. A, vol. 12, no. 7, pp. 1559–1570, Jul. 1995.
[17] D. L. Fried, "Statistics of a geometric representation of wavefront distortion," J. Opt. Soc. Amer., vol. 55, no. 11, pp. 1427–1435, 1965.
[18] F. Roddier, Ed., Adaptive Optics in Astronomy. Cambridge, U.K.: Cambridge Univ. Press, 1999.
[19] D. Mourard, I. Bosc, A. Labeyrie, L. Koechlin, and S. Saha, “The ro-
tating envelope of the hot star Gamma Cassiopeiae resolved by optical
interferometry,” Nature, vol. 342, pp. 520–522, Nov. 1989.
[20] W. J. Tango and R. Q. Twiss, “Michelson stellar interferometry,”
in Progress in Optics (A81-13109 03-74). Amsterdam, The Netherlands: North-Holland Publishing, 1980, vol. 17, pp. 239–277.
[21] R. C. Jennison, “A phase sensitive interferometer technique for the
measurement of the fourier transforms of spatial brightness distribu-
tion of small angular extent,” Monthly Notices Roy. Astron. Soc., vol.
118, pp. 276–284, 1958.
[22] S. Meimon, L. M. Mugnier, and G. Le Besnerais, “A self-calibration
approach for optical long baseline interferometry,” J. Opt. Soc. Amer.
A, accepted for publication.
[23] J. D. Monnier et al., "First results with the IOTA3 imaging interferometer: The spectroscopic binaries λ Vir and WR 140," Astrophys. J. Lett., vol. 602, no. 1, pp. L57–L60, Feb. 2004.
[24] J. C. Dainty and A. H. Greenaway, "Estimation of spatial power spectra in speckle interferometry," J. Opt. Soc. Amer., vol. 69, no. 5, pp. 786–790, May 1979.
[25] B. Wirnitzer, "Bispectral analysis at low light levels and astronomical speckle masking," J. Opt. Soc. Amer. A, vol. 2, no. 1, pp. 14–21, Jan. 1985.
[26] T. Pauls et al., “A data exchange standard for optical (visible/IR) interferometry,” in New Frontiers in Stellar Interferometry, Proc. SPIE Conf. Bellingham, WA: SPIE, 2004, vol. 5491.
[27] G. Perrin, “The calibration of interferometric visibilities obtained with single-mode optical interferometers. Computation of error bars and correlations,” Astron. Astrophys., vol. 400, pp. 1173–1181, Mar. 2003.
[28] R. T. Rockafellar, Convex Analysis. Princeton, NJ: Princeton Univ. Press, 1996.
[29] D. Hestroffer, “Centre to limb darkening of stars. New model and ap-
plication to stellar interferometry,” A&A, vol. 327, pp. 199–206, Nov.
1997.
[30] A. Manduca, R. A. Bell, and B. Gustafsson, “Limb darkening co-
efficients for late-type giant model atmospheres,” A&A, vol. 61, pp.
809–813, Dec. 1977.
[31] S. Lacour et al., “The limb-darkened Arcturus: Imaging with the IOTA/IONIC interferometer,” arXiv e-prints, Apr. 2008.
[32] G. Perrin et al., “Study of molecular layers in the atmosphere of the supergiant star μ Cep by interferometry in the K band,” A&A, vol. 436, pp. 317–324, Jun. 2005.
[33] F. R. Schwab, “Optimal gridding of visibility data in radio interfer-
ometry,” in Measurement and Processing for Indirect Imaging, J. A.
Roberts, Ed. Cambridge, U.K.: Cambridge Univ. Press, 1984, p. 333.
[34] A. Lannes, E. Anterrieu, and K. Bouyoucef, “Fourier interpolation and reconstruction via Shannon-type techniques. I. Regularization principle,” J. Mod. Opt., vol. 41, no. 8, pp. 1537–1574, 1994.
[35] A. Lannes, S. Roques, and M.-J. Casanove, “Stabilized reconstruction in signal and image processing: Part I: Partial deconvolution and spectral extrapolation with limited field,” J. Mod. Opt., vol. 34, pp. 161–226, 1987.
[36] A. Lannes, E. Anterrieu, and P. Maréchal, “CLEAN and WIPE,” Astron.
[37] J.-M. Conan, L. M. Mugnier, T. Fusco, V. Michau, and G. Rousset,
“Myopic deconvolution of adaptive optics images using object and
point spread function power spectra,” Appl. Opt., vol. 37, no. 21, pp.
4614–4622, Jul. 1998.
[38] S. Meimon, “Reconstruction d’images astronomiques en inter-
férométrie optique,” Ph.D. dissertation, Université Paris Sud, Paris,
France, 2005.
[39] A. Blanc, L. M. Mugnier, and J. Idier, “Marginal estimation of aber-
rations and image restoration by use of phase diversity,” J. Opt. Soc.
Amer. A, vol. 20, no. 6, pp. 1035–1045, 2003.
[40] L. M. Mugnier, C. Robert, J.-M. Conan, V. Michau, and S. Salem,
“Myopic deconvolution from wavefront sensing,” J. Opt. Soc. Amer.
A, vol. 18, pp. 862–872, Apr. 2001.
[41] W. J. Rey, Introduction to Robust and Quasi-Robust Statistical
Methods. Berlin, Germany: Springer-Verlag, 1983.
[42] S. Brette and J. Idier, “Optimized single site update algorithms for
image deblurring,” in Proc. IEEE ICIP, Lausanne, Switzerland, Sep.
1996, pp. 65–68.
[43] R. Narayan and R. Nityananda, “Maximum entropy image restoration in astronomy,” Ann. Rev. Astron. Astrophys., vol. 24, pp. 127–170, Sep. 1986.
[44] D. F. Buscher, “Direct maximum-entropy image reconstruction from the bispectrum,” in IAU Symp. 158: Very High Angular Resolution Imaging, J. G. Robertson and W. J. Tango, Eds., 1994, p. 91.
[45] J.-F. Giovannelli and A. Coulais, “Positive deconvolution for superimposed extended source and point sources,” Astron. Astrophys., vol. 439, pp. 401–412, 2005.
[46] L. Delage, F. Reynaud, E. Thiébaut, K. Bouyoucef, P. Maréchal, and A. Lannes, “Présentation d’un démonstrateur de synthèse d’ouverture utilisant des liaisons par fibres optiques,” in Actes du 16e Colloque GRETSI, Grenoble, France, Sep. 1997, pp. 829–832.
[47] A. Lannes, “Weak-phase imaging in optical interferometry,” J. Opt. Soc. Amer. A, vol. 15, no. 4, pp. 811–824, Apr. 1998.
[48] E. Thiébaut, P. J. V. Garcia, and R. Foy, “Imaging with Amber/VLTI:
The case of microjets,” Astrophys. Space. Sci., vol. 286, pp. 171–176,
2003.
[49] E. Thiébaut, “Mira: An effective imaging algorithm for optical interferometry,” presented at the Astronomical Telescopes and Instrumentation SPIE Conf., 2008, vol. 7013, paper 7013-53.
[50] E. Thiébaut, “Reconstruction d’image en interférométrie optique,” in
XXIe Colloque GRETSI, Traitement du signal et des images, Troyes,
France, 2007.
[51] E. Thiébaut, “Optimization issues in blind deconvolution algorithms,”
in Astronomical Data Analysis II, Proc. SPIE Conf., J.-L. Starck and
F. D. Murtagh, Eds. Bellingham, WA: SPIE, 2002, vol. 4847, pp.
174–183.
[52] J. Skilling and R. K. Bryan, “Maximum entropy image reconstruc-
tion: General algorithm,” Monthly Not. Roy. Astron. Soc., vol. 211, pp.
111–124, 1984.
[53] S. Meimon, L. M. Mugnier, and G. Le Besnerais, “Reconstruction
method for weak-phase optical interferometry,” Opt. Lett., vol. 30, no.
14, pp. 1809–1811, Jul. 2005.
[54] T. J. Cornwell and P. N. Wilkinson, “A new method for making maps
with unstable radio interferometers,” Monthly Notices Roy. Astron.
Soc., vol. 196, pp. 1067–1086, 1981.
[55] A. Lannes, “Integer ambiguity resolution in phase closure imaging,” J. Opt. Soc. Amer. A, vol. 18, pp. 1046–1055, May 2001.
[56] Y. Bar-Shalom and X.-R. Li, Multitarget-Multisensor Tracking: Principles and Techniques. Storrs, CT: Univ. Connecticut, 1995.
[57] S. Meimon, L. M. Mugnier, and G. Le Besnerais, “A convex approxi-
mation of the likelihood in optical interferometry,” J. Opt. Soc. Amer.
A, Nov. 2005.
[58] J.-P. Berger et al., “An integrated-optics 3-way beam combiner for
IOTA,” in Interferometry for Optical Astronomy II—Proc. SPIE, W.
A. Traub, Ed., Feb. 2003, vol. 4838, pp. 1099–1106.
[59] F. Van Leeuwen, Hipparcos, The New Reduction of the Raw Data.
New York: Springer, 2007.
Guy Le Besnerais was born in Paris, France, in 1967. He graduated from the École Nationale Supérieure de Techniques Avancées in 1989 and received the Ph.D. degree in physics from the Université de Paris-Sud, Orsay, France, in 1993.
Since 1994, he has been with the Office National
d’Études et Recherches Aérospatiales, Châtillon,
France. His main interests are in the fields of image
reconstruction and spatio-temporal processing of
image sequences. He made various contributions in
optical interferometry, super-resolution, optical-flow
estimation, and 3-D reconstruction from aerial image sequences.
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 2, NO. 5, OCTOBER 2008
Sylvestre Lacour received the Ph.D. degree in astro-
physics in 2007, specializing in astronomical instru-
mentation, mainly involving interferometry and high
angular resolution.
He is a tenured Assistant Researcher at CNRS.
He is working within the National Institute for
Earth Sciences and Astronomy, Meudon, France.
His astrophysical interests involve the interstellar
medium, evolved stars, and extrasolar planets.
Laurent M. Mugnier graduated from the École Polytechnique, France, in 1988. He received the Ph.D. degree in 1992 from the École Nationale Supérieure des Télécommunications (ENST), France, for his work on the digital reconstruction of incoherent-light holograms.
In 1994, he joined ONERA, where he is currently a Senior Research Scientist in the field of inverse problems and high-resolution optical imaging. His current research interests include image reconstruction and wavefront sensing, in particular for adaptive-optics corrected imaging through turbulence, for retinal imaging, for Earth observation, and for optical interferometry in astronomy. His publications include five contributions to reference books and 30 papers in peer-reviewed international journals.
Eric Thiébaut was born in Béziers, France, in 1966. He graduated from the École Normale Supérieure in 1987 and received the Ph.D. degree in astrophysics from the Université Pierre et Marie Curie, Paris, France, in 1994.
Since 1995, he has been an Astronomer at the
Centre de Recherche Astrophysique de Lyon,
France. His main interests are in the fields of signal
processing and image reconstruction. He has made
various contributions in blind deconvolution, optical
interferometry, and optimal detection.
Guy Perrin was born in Saint-Etienne, France, in
1968. He graduated from École Polytechnique in
1992 and received the Ph.D. degree in astrophysics
from Université Paris Diderot in 1996.
He has been an Astronomer with the Observatoire de Paris since 1999. His research topics focus both on instrumental techniques for high angular resolution observations and on the use of interferometers and telescopes equipped with adaptive optics to study point-like objects such as evolved stars, active galactic nuclei, and the Galactic Center.
Serge Meimon was born in Paris, France, in 1978. He graduated from the École Centrale de Nantes in 2002 and received the Ph.D. degree in physics from the Université Paris-Sud, Orsay, in 2005 for his work on image-reconstruction methods in optical interferometry.
Since then, he has been with the Office National
d’Études et Recherches Aérospatiales, Châtillon,
France. He has made various contributions in the
field of optical high-angular resolution.