Solving Incrementally the Fitting and Detection Problems in fMRI Time Series

Alexis Roche, Philippe Pinel, Stanislas Dehaene, and Jean-Baptiste Poline
CEA, Service Hospitalier Frédéric Joliot, Orsay, France
Institut d'Imagerie Neurofonctionnelle (IFR 49), Paris, France
roche@shfj.cea.fr
Abstract. We tackle the problem of real-time statistical analysis of functional magnetic resonance imaging (fMRI) data. In a recent paper, we proposed an incremental algorithm based on the extended Kalman filter (EKF) to fit fMRI time series in terms of a general linear model with autoregressive errors (GLM-AR model). We here improve the technique using a new Kalman filter variant specifically tailored to the GLM-AR fitting problem, the Refined Kalman Filter (RKF), which avoids both the estimation bias and the initialization issues typical of the EKF, at the price of an increased memory load. We then demonstrate the ability of the method to perform online analysis on a "functional calibration" event-related fMRI protocol.
1 Introduction
One of the current challenges in functional magnetic resonance imaging (fMRI) is to display reconstructed volumes and map brain activations in real time during an ongoing scan. This will make it possible to interact with fMRI experiments much more efficiently, either by adjusting acquisition parameters online according to the subject's performance, or by designing paradigms that incorporate neurophysiological feedback. To date, the feasibility of real-time fMRI processing has been limited by the computational cost of both the three-dimensional reconstruction of MR scans and their statistical analysis.
This paper addresses the latter item and therefore focuses on the feasibility of online fMRI statistical analysis. In this context, our goal is to fit, at each scan time, the currently available fMRI time course in terms of an appropriate model of the BOLD response, and then test for brain regions that are significantly correlated with the model. We will focus here on general linear models (GLM) [1], as they are by far the most common in the fMRI processing community.
Although many detection algorithms have been proposed so far, most of them are intended to work offline, in the sense that they process a complete fMRI sequence all at once, with a computational cost and memory load proportional to the sequence length. Applying such methods online would imply that the incremental computation time increases with each new scan, which is clearly a serious drawback under real-time constraints. To overcome this problem, some techniques have been proposed that compute the correlation between the signal and the model, either in an incremental fashion [2], or by restricting computations to a sliding time window [3].
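For intuition only, here is a minimal sketch, not taken from any of the cited papers, of how such a constant-time update can be organized for a plain correlation detector: running sums are maintained so that the Pearson correlation between a voxel time course and a single reference regressor is refreshed with a handful of operations per scan. The class name and the single-regressor setting are illustrative assumptions.

```python
import numpy as np

class IncrementalCorrelation:
    """Constant-time update of the Pearson correlation between a voxel
    time course y and a known reference regressor x (illustrative sketch)."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def update(self, x_i, y_i):
        # Accumulate sufficient statistics; each new scan costs a few flops only.
        self.n += 1
        self.sx += x_i
        self.sy += y_i
        self.sxx += x_i * x_i
        self.syy += y_i * y_i
        self.sxy += x_i * y_i

    def correlation(self):
        n = self.n
        cov = self.sxy - self.sx * self.sy / n
        vx = self.sxx - self.sx ** 2 / n
        vy = self.syy - self.sy ** 2 / n
        return cov / np.sqrt(vx * vy)
```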
Such methods have the ability to process each new scan in a constant amount of time, but, being based on standard correlation, they work under the implicit assumption that the errors in the signal are temporally uncorrelated. Should this assumption be incorrect, the significance level of activation clusters may be substantially biased (over- or under-estimated). The importance of correcting inferences for temporal autocorrelations is widely recognized owing to the following facts: (i) errors may be found to be severely autocorrelated in some regions, especially when the model lacks flexibility; (ii) since autocorrelation is spatially dependent, it cannot be accounted for by a global threshold correction.
We recently advocated Kalman filtering techniques as good candidates for online fMRI analysis [4]. In its standard form, the Kalman filter is an incremental solver for ordinary least-squares (OLS) regression problems, and is therefore well suited to GLM fitting when assuming uncorrelated errors. In the more general case where the noise autocorrelation is unknown and must therefore be estimated, the regression problem becomes nonlinear, a situation that may be handled using an extended Kalman filter (EKF) [4]. The main drawback of this technique is that it requires parameter initialization to work; we observed in practice that a good initialization is difficult to tune and is very much machine-dependent.
To work around these issues, we design here a new Kalman filter variant to solve the GLM-AR fitting problem incrementally. Rather than using the linearization mechanism underlying the EKF, our basic idea is to rely on the standard Kalman filter to provide initial parameter guesses at each iteration, and then refine the result using a simple optimization scheme. We will show that the algorithm outperforms the EKF in that it is insensitive to initialization and provides asymptotically unbiased parameter estimates.
2 GLM-AR Model Fitting
Let us consider the time course vector $y = [y_1, \ldots, y_n]^t$ associated with a given voxel in an fMRI sequence, where the acquisition times are numbered from 1 to $n$. In the remainder, it will be assumed that each incoming scan is spatially aligned with the first scan, which may necessitate a realignment procedure. In our usual processing pipeline, no spatial filtering is applied to the original scans (this enables us to use a slice-specific model to account for slice timing effects).
2.1 The GLM-AR Model
The general linear model states that the measured time course is a linear combination of known signals $x_1, \ldots, x_p$, called regressors, up to an additive noise:

$$ y = X\beta + \varepsilon, $$

where $X \equiv (x_1, \ldots, x_p)$ is an $n \times p$ matrix called the design matrix, which concatenates the different regressors columnwise, $\varepsilon$ is the outcome of the noise, and $\beta$ is the unknown $p \times 1$ vector of regression coefficients, or "effect" vector. In the following, $x_i^t$ denotes the $i$-th row of $X$, i.e., the vector of regressor values at scan $i$. The design matrix contains paradigm-related regressors obtained, e.g., by convolving the different stimulation onsets with a canonical hemodynamic response function [1], as well as regressors that model the low-frequency drift, hence enabling us to "detrend" the signal (we use polynomials up to order three). Notice that the design matrix can be assembled incrementally, since it involves either causal convolutions or pre-specified detrending functions.
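To illustrate the incremental-assembly remark above, here is a small sketch under my own assumptions (the gamma-shaped HRF, the function names, and the drift scaling are illustrative, not the authors' implementation) of how one new design-matrix row can be produced at each scan from the onsets observed so far:

```python
import numpy as np

def hrf(t, peak=5.0):
    """Toy gamma-shaped hemodynamic response function (illustrative only)."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, (t / peak) ** 2 * np.exp(-(t - peak) / peak), 0.0)

def design_row(i, onsets_per_condition, tr, drift_order=3):
    """Row i of the design matrix: causal convolution of the past onsets with
    the HRF for each condition, plus polynomial drift regressors."""
    t_i = i * tr
    row = [sum(hrf(t_i - o) for o in onsets if o <= t_i)
           for onsets in onsets_per_condition]
    # Polynomial drift terms; the 1/100 scaling assumes roughly 100 scans.
    row += [(i / 100.0) ** k for k in range(drift_order + 1)]
    return np.array(row)
```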
In this work, we assume that $\varepsilon$ is a stationary Gaussian zero-mean AR(1) random process, i.e., it is characterized by $\varepsilon_i = a\,\varepsilon_{i-1} + n_i$, where $a$ is the autocorrelation parameter and $n_i$ is a "generator" white noise with instantaneous Gaussian distribution $N(0, \sigma^2)$. Notice that the condition $|a| < 1$ must hold for the AR noise to be stationary.
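As a point of reference for the fitting algorithms below, the following minimal sketch simulates a GLM-AR(1) time course as just defined; the toy boxcar design and the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_glm_ar1(X, beta, a, sigma, rng=None):
    """Simulate y = X @ beta + eps, where eps is a stationary zero-mean
    Gaussian AR(1) process with autocorrelation |a| < 1 and scale sigma."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    eps = np.empty(n)
    # First error drawn from the stationary distribution N(0, sigma^2 / (1 - a^2)).
    eps[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - a ** 2))
    for i in range(1, n):
        eps[i] = a * eps[i - 1] + rng.normal(0.0, sigma)
    return X @ beta + eps

# Toy design: one boxcar regressor, a constant, and a linear drift term.
n = 100
t = np.arange(n)
X = np.column_stack([(t // 10) % 2, np.ones(n), t / n])
y = simulate_glm_ar1(X, beta=np.array([2.0, 100.0, 5.0]), a=0.4, sigma=1.0, rng=0)
```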
2.2 Offline Fitting
Solving the GLM-AR fitting problem means finding appropriate, somehow optimal, statistical estimators of the effect $\beta$, the noise autocorrelation $a$, and the scale parameter $\sigma$. A powerful estimation approach consists of maximizing the likelihood function or, equivalently, minimizing its negated logarithm, given by [4]:

$$ L(\beta, a, \sigma) = n \log \sqrt{2\pi}\,\sigma - \frac{1}{2}\log(1 - a^2) + \frac{1}{2\sigma^2}\Big[ (1 - a^2)\, r_1^2 + \sum_{i=2}^{n} (r_i - a\, r_{i-1})^2 \Big], \qquad (1) $$

where $r_i \equiv y_i - x_i^t \beta$ denotes the residual at time $i$, and is a function of $\beta$ only.
There is no closed-form solution to the minimization of equation (1), except when $a$ is considered known beforehand, hence kept constant, in which case the problem boils down to traditional OLS regression. Based on this remark, maximum likelihood estimation may be implemented using an alternate optimization scheme, ensuring locally optimal parameter estimates [4]. However, because each iteration involves assembling and inverting a $p \times p$ matrix, it may be hopelessly time consuming when dealing with large models. Alternative estimation strategies include pre-coloring [1], pre-whitening [5], bias-corrected OLS estimation [6], restricted maximum likelihood [7], and variational Bayesian techniques [8].
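The alternate optimization scheme is not spelled out in the paper; the sketch below is one plausible reading of it, assuming a whitening-based least-squares step for $\beta$ at fixed $a$ and a closed-form lag-1 update of $a$ at fixed $\beta$. The function signature, the fixed iteration count, and the clipping of $a$ are assumptions.

```python
import numpy as np

def fit_glm_ar1_offline(X, y, n_iter=10):
    """Alternate minimization of the negated AR(1) log-likelihood (sketch):
    for fixed a, beta is a whitened least-squares estimate; for fixed beta,
    a is re-estimated from the lag-1 autocorrelation of the residuals."""
    a = 0.0
    for _ in range(n_iter):
        # Whiten: first row scaled by sqrt(1 - a^2), then AR(1) differences.
        W_X = np.vstack([np.sqrt(1 - a ** 2) * X[:1], X[1:] - a * X[:-1]])
        W_y = np.concatenate([np.sqrt(1 - a ** 2) * y[:1], y[1:] - a * y[:-1]])
        beta, *_ = np.linalg.lstsq(W_X, W_y, rcond=None)
        r = y - X @ beta
        # Closed-form AR(1) update for fixed beta (approximate ML), kept in (-1, 1).
        a = np.clip(np.dot(r[1:], r[:-1]) / np.dot(r, r), -0.99, 0.99)
    sigma2 = np.mean((r[1:] - a * r[:-1]) ** 2)
    return beta, a, np.sqrt(sigma2)
```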
2.3 Online Fitting: The Refined Kalman Filter
In a real-time context, we aim to solve the fitting problem each time a new measurement is available, i.e., at time $i$, to process the partial sequence $(y_1, y_2, \ldots, y_i)$ as if it were the complete one. We discussed in section 1 the need for specific techniques to achieve such incremental analysis. We present here the refined Kalman filter (RKF) as an alternative to previous online fitting techniques [2,3,4].
The online estimation problem may be formulated in terms of maximizing the likelihood function (1) as applied to the sequence available at time $i$. For better computational tractability, we will however consider a slightly modified version of the likelihood criterion:
$$ \tilde{L}_i(\beta, a, \sigma) = i \log \sqrt{2\pi}\,\sigma + \frac{1}{\sigma^2}\, C_i(\beta, a) \qquad (2) $$

with

$$ C_i(\beta, a) = (1 + a^2)\, \underbrace{\frac{1}{2}\sum_{k=1}^{i} r_k^2(\beta)}_{C_i^0(\beta)} \;-\; 2\gamma_i a\, \underbrace{\frac{1}{2}\sum_{k=2}^{i} r_k(\beta)\, r_{k-1}(\beta)}_{C_i^1(\beta)}, $$
where we define $\gamma_i \equiv i/(i-1)$. It may be shown that this modified likelihood is asymptotically equivalent to the genuine likelihood, in the sense that the average difference $(\tilde{L}_i - L_i)/i$ converges uniformly towards zero (on any bounded open set) as $i$ approaches infinity. Therefore, the minimizers of (2) inherit the general maximum likelihood property of being asymptotically unbiased. Notice that for the parameter $\beta$, the property holds not only asymptotically, but for any sample size. We introduce the correction factor $\gamma_i$ to further reduce the estimation bias on $a$ and $\sigma$. The RKF principles then arise from the following remarks:

– From equation (2), we observe that the estimation of $\sigma$ may be completely decoupled from that of $(\beta, a)$; clearly, the optimal scale is determined from the minimum of $C_i$ by $\sigma_i^2 = (2/i)\, \min_{\beta,a} C_i(\beta, a)$.

– The criterion $C_i(\beta, a)$ is a weighted sum of two functions of $\beta$ only, $C_i^0(\beta)$ and $C_i^1(\beta)$; the first is the classical OLS criterion and may be calculated incrementally using a standard Kalman filter. A similar incremental calculation may be used for the second term $C_i^1(\beta)$, as it is also quadratic.

– From the calculation of both $C_i^0(\beta)$ and $C_i^1(\beta)$, an alternate minimization scheme similar to that described in section 2.2 can be used to iteratively estimate the autocorrelation $a$ and refine the OLS estimate of $\beta$.

The RKF algorithm is detailed in table 1 and commented below.
Standard Kalman iterations. The Kalman filter is used to incrementally update the OLS criterion $C_i^0$ defined in equation (2), so as to provide a starting guess of $\beta$ at each scan time. One motivation for this strategy is that the OLS estimator is at least unbiased, even though it is not optimal for the GLM-AR model [1,6]. At each scan time $i$, the Kalman filter updates the minimizer $\beta_i^0$ of $C_i^0$, the minimum criterion value $c_i^0 \equiv C_i^0(\beta_i^0)$, as well as its inverse Hessian $S_i^0$. Since the Hessian $H_i^0$ is later needed in the refinement loop, we also update its value recursively in order to avoid inverting $S_i^0$.
Refinement loop. After performing one Kalman iteration, we update the "correction" function $C_i^1(\beta)$ involved in equation (2), which is quadratic, hence fully specified by its derivatives up to order two. Let $c_i^1 \equiv C_i^1(\beta_i^0)$, $g_i^1 \equiv \partial C_i^1/\partial\beta\,(\beta_i^0)$ and $H_i^1 \equiv \partial^2 C_i^1/\partial\beta^2$ denote respectively the function value, gradient and Hessian computed at the current OLS estimate $\beta_i^0$. These quantities are easily related to their previous values using equation (3) in table 1.
Once both $C_i^0(\beta)$ and $C_i^1(\beta)$ are available, it becomes possible to minimize $C_i(\beta, a)$ as defined in equation (2), which is the actual estimation criterion we are interested in. To that end, we perform an alternate minimization of $C_i(\beta, a)$. When $\beta_i$ is held fixed, the optimal autocorrelation is clearly given by $a_i = \gamma_i\, C_i^1(\beta_i)/C_i^0(\beta_i)$. On the other hand, when $a_i$ is fixed, re-estimating $\beta$ amounts to minimizing the sum of two quadratic functions, yielding a closed-form solution given by equation (4) in table 1. The formula involves $S_i \equiv (\partial^2 C_i/\partial\beta^2)^{-1}$, the inverse Hessian of $C_i(\beta, a)$ w.r.t. $\beta$, which is a function of $a_i$ only, as it does not depend on $\beta_i$. This matrix plays a key role at the detection stage, as it closely relates to the covariance of $\beta_i$ (see section 2.4).
Comparison with the EKF. The key feature of the RKF is that its incremental updates do not involve any approximation, unlike the EKF [4], which proceeds by successive linearizations. This property is achieved by exploiting the specific form of the estimation criterion (2), and by jointly updating the two quadratic functions $C_i^0(\beta)$ and $C_i^1(\beta)$ exactly. Hence, the data information is fully preserved by the RKF, whereas it is unavoidably degraded across iterations by an EKF.
Fig. 1. Comparative fitting example using the RKF and the EKF in an event-related paradigm (see section 3). From left to right, RKF (dark curve) and EKF (bright curve) results using respectively 1 and 2 local iterations are compared with the maximum likelihood result (black curve) computed using the offline algorithm described in section 2.2. In this case, the model contains 15 regressors.
2.4 Online Detection
At each scan time $i$, the RKF provides a current estimate $\beta_i$ of the effect in each voxel. However, to test whether the effect is significant, we also need to evaluate some measure of uncertainty on this estimate. Based on the remark that our estimation criterion is an asymptotically valid likelihood function (see section 2.3), its inverse "Fisher information" is a natural approximation of the variance matrix of $\beta_i$: $\mathrm{Var}(\beta_i) \approx (\partial^2 \tilde{L}_i/\partial\beta^2)^{-1} = \sigma_i^2 S_i$, where $\sigma_i$ is the current scale estimate and $S_i$ is the inverse Hessian defined in section 2.3.
Table 1. Refined Kalman Filter (RKF) synopsis.

Initialize with: $\beta_0^0 = 0$, $H_0^0 = 0_p$, $S_0^0 = \lambda I_p$, with $\lambda$ large enough (e.g. $\lambda = 10^{10}$).

For $i \in \{1, 2, \ldots, n\}$:

1. Update the OLS estimate (standard Kalman iteration). Compute the auxiliary variables $\rho_i = y_i - x_i^t \beta_{i-1}^0$, $k_i = S_{i-1}^0 x_i$, $v_i = 1 + x_i^t k_i$, in order to perform the following recursion:
$$ \beta_i^0 = \beta_{i-1}^0 + \Delta\beta_i^0 \ \text{with}\ \Delta\beta_i^0 = \frac{\rho_i}{v_i}\, k_i, \qquad S_i^0 = S_{i-1}^0 - \frac{1}{v_i}\, k_i k_i^t, \qquad H_i^0 = H_{i-1}^0 + x_i x_i^t, \qquad c_i^0 = c_{i-1}^0 + \frac{\rho_i^2}{2 v_i}. $$

2. Compute the value, gradient and Hessian of $C_i^1(\beta)$ at the new OLS estimate $\beta_i^0$. Using the residuals $r_i = y_i - x_i^t \beta_i^0$ and $r_{i-1} = y_{i-1} - x_{i-1}^t \beta_i^0$, do:
$$ c_i^1 = c_{i-1}^1 + (g_{i-1}^1)^t \Delta\beta_i^0 + \tfrac{1}{2} (\Delta\beta_i^0)^t H_{i-1}^1 \Delta\beta_i^0 + \tfrac{1}{2}\, r_i r_{i-1}, $$
$$ g_i^1 = g_{i-1}^1 + H_{i-1}^1 \Delta\beta_i^0 - \tfrac{1}{2} (r_{i-1} x_i + r_i x_{i-1}), \qquad H_i^1 = H_{i-1}^1 + \tfrac{1}{2} (x_i x_{i-1}^t + x_{i-1} x_i^t). \qquad (3) $$

3. Refinement loop. Initialize $\beta_i = \beta_i^0$ and $S_i = S_i^0$, then repeat the following two-pass routine a fixed number of times:
– Estimate the autocorrelation, using the values of $C_i^0(\beta)$ and $C_i^1(\beta)$ at the current estimate $\beta_i$ and the deviation from the OLS estimate $\Delta\beta_i = \beta_i - \beta_i^0$:
$$ \tilde{c}_i^0 = c_i^0 + \tfrac{1}{2}\, \Delta\beta_i^t H_i^0 \Delta\beta_i, \qquad \tilde{c}_i^1 = c_i^1 + (g_i^1)^t \Delta\beta_i + \tfrac{1}{2}\, \Delta\beta_i^t H_i^1 \Delta\beta_i, \qquad a_i = \gamma_i\, \frac{\tilde{c}_i^1}{\tilde{c}_i^0}. $$
– Refine $\beta_i$ and the inverse Hessian $S_i$:
$$ S_i = \frac{1}{1 + a_i^2} \left( I_p + \frac{2 \gamma_i a_i}{1 + a_i^2}\, S_i^0 H_i^1 \right) S_i^0, \qquad \beta_i = \beta_i^0 + 2 \gamma_i a_i\, S_i\, g_i^1. \qquad (4) $$

4. Estimate the scale:
$$ \sigma_i^2 = \frac{2\, (1 - a_i^2)\, \tilde{c}_i^0}{i}. $$
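Read literally, the Table 1 synopsis can be turned into a compact single-voxel implementation such as the sketch below. This is my reading of the synopsis, not the authors' code: the class interface, the handling of the first scan, and the fixed number of refinement passes are assumptions.

```python
import numpy as np

class RefinedKalmanFilter:
    """Single-voxel GLM-AR(1) fitting, following the Table 1 synopsis (sketch)."""

    def __init__(self, p, lam=1e10, n_refine=3):
        self.i = 0
        self.n_refine = n_refine
        self.beta0 = np.zeros(p)       # OLS estimate (minimizer of C^0)
        self.S0 = lam * np.eye(p)      # inverse Hessian of C^0
        self.H0 = np.zeros((p, p))     # Hessian of C^0
        self.c0 = 0.0                  # minimum value of C^0
        self.c1 = 0.0                  # value of C^1 at beta0
        self.g1 = np.zeros(p)          # gradient of C^1 at beta0
        self.H1 = np.zeros((p, p))     # Hessian of C^1
        self.x_prev = np.zeros(p)
        self.y_prev = 0.0
        self.beta, self.S = self.beta0.copy(), self.S0.copy()
        self.a, self.sigma2 = 0.0, 0.0

    def update(self, x, y):
        """Process scan i with design-matrix row x (p-vector) and measurement y."""
        self.i += 1
        i = self.i
        # 1. Standard Kalman (recursive least-squares) iteration on C^0.
        rho = y - x @ self.beta0
        k = self.S0 @ x
        v = 1.0 + x @ k
        dbeta0 = (rho / v) * k
        self.beta0 = self.beta0 + dbeta0
        self.S0 = self.S0 - np.outer(k, k) / v
        self.H0 = self.H0 + np.outer(x, x)
        self.c0 = self.c0 + rho ** 2 / (2.0 * v)
        # 2. Value, gradient and Hessian of C^1 at the new OLS estimate.
        if i >= 2:
            r_i = y - x @ self.beta0
            r_im1 = self.y_prev - self.x_prev @ self.beta0
            self.c1 += self.g1 @ dbeta0 + 0.5 * dbeta0 @ self.H1 @ dbeta0 \
                + 0.5 * r_i * r_im1
            self.g1 += self.H1 @ dbeta0 - 0.5 * (r_im1 * x + r_i * self.x_prev)
            self.H1 += 0.5 * (np.outer(x, self.x_prev) + np.outer(self.x_prev, x))
        self.x_prev, self.y_prev = x.copy(), float(y)
        # 3. Refinement loop: alternate minimization of C_i(beta, a).
        gamma = i / (i - 1.0) if i > 1 else 1.0   # gamma_1 is irrelevant (C^1 empty)
        beta, S, a = self.beta0.copy(), self.S0.copy(), 0.0
        for _ in range(self.n_refine):
            d = beta - self.beta0
            c0t = self.c0 + 0.5 * d @ self.H0 @ d
            c1t = self.c1 + self.g1 @ d + 0.5 * d @ self.H1 @ d
            a = gamma * c1t / c0t if c0t > 0 else 0.0
            S = (np.eye(len(beta)) + (2 * gamma * a / (1 + a ** 2)) * self.S0 @ self.H1) \
                @ self.S0 / (1 + a ** 2)
            beta = self.beta0 + 2 * gamma * a * S @ self.g1
        # 4. Scale estimate from the minimized criterion.
        d = beta - self.beta0
        c0t = self.c0 + 0.5 * d @ self.H0 @ d
        self.beta, self.S, self.a = beta, S, a
        self.sigma2 = 2.0 * (1.0 - a ** 2) * c0t / i
        return self.beta
```

Feeding the filter the design-matrix rows $x_i$ and the samples $y_i$ scan by scan maintains, per voxel, the quantities $(\beta_i, a_i, \sigma_i^2, S_i)$ needed for the detection step of section 2.4.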
Given a contrast vector $c$, we are interested in identifying the voxels that show a contrasted effect $c^t\beta$ that is, for instance, significantly positive. As a first-order approximation, we may assume that the effect estimate is normally distributed around the true, unknown effect $\beta$, i.e., $\beta_i \sim N(\beta, \sigma_i^2 S_i)$. Hence, under the null hypothesis that $c^t\beta = 0$, the statistic

$$ z_i = \mathrm{Var}(c^t\beta_i)^{-\frac{1}{2}}\, c^t\beta_i = \sigma_i^{-1}\, (c^t S_i c)^{-\frac{1}{2}}\, c^t\beta_i $$

defines a z-score. Testing for positive activations may thus be achieved at any time $i$ by thresholding the image of z-scores. Notice that this approach may also be interpreted in a Bayesian perspective [4]. As is standard in practice, we apply some spatial Gaussian smoothing to the z-score image before thresholding to improve the localization power of detection [1]. We usually set the threshold so as to match an uncorrected p-value of $10^{-3}$, although this should ideally be corrected for multiple comparisons.
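To make the detection step concrete, here is a minimal sketch of the z-score computation from the per-voxel quantities maintained by the filter; the use of SciPy for the threshold and the function name are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def contrast_z_score(beta, S, sigma2, c):
    """z-score of the contrasted effect c'beta, with Var(beta_i) ~ sigma_i^2 S_i."""
    effect = c @ beta
    std = np.sqrt(sigma2 * (c @ S @ c))
    return effect / std

# One-sided threshold matching an uncorrected p-value of 1e-3 (z ~ 3.09).
z_threshold = norm.isf(1e-3)
```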
3 Results
The method was tested offline on several fMRI datasets acquired at our site on both GE Signa 1.5T and Bruker 3T whole-body scanners, always providing final results consistent with SPM'99. For illustration, we present here a "functional calibration" protocol designed to localize the main brain functions in about five minutes. The experimental paradigm contains 11 different conditions (labelled as 'visual', 'motor', 'calculation' and 'language'), from which a total of 100 events are presented pseudo-randomly to the subject. The data was acquired on the Bruker 3T scanner using a 3 s repetition time, for a total of 100 scans with 64 × 64 × 26 voxels of size 3.75 × 3.75 × 4.5 mm³.
Fig. 2. Incremental detection of visual and auditory regions in a functional calibration paradigm. From left to right, activation maps after respectively 2'00", 3'30" and 5'00".
The RKF algorithm was applied offline to the fMRI sequence. One regressor was associated with each condition by convolving its onsets with a canonical hemodynamic response function [1]. Three additional polynomial regressors were used to model the low-frequency drifts present in the signal. The number of iterations in the refinement loop was set to three. Using a C implementation, the computation time to process each time frame was about two tenths of a second on a standard PC (1.80 GHz processor).
The activation maps in figure 2 show, in an axial slice, the regions that were detected respectively after 40 (2'00"), 70 (3'30") and 100 (5'00") scans, for a contrast between visual and auditory sentences. As expected, positive effects are found laterally in the occipital lobe, where the visual cortex lies (top row), while negative effects are found in the temporal lobes (bottom row). After 2'00", no significant visual region is detected in this slice, whereas auditory regions are already appearing. Larger clusters are found after 3'30", without major changes until the end of the sequence. We notice a subtle loss of sensitivity in the right temporal lobe, which might be explained either by late subject motion or by a neuronal adaptation effect. Although rather qualitative, these results demonstrate the potential of real-time fMRI, suggesting that functional regions may be detected significantly before the end of an experiment.
4 Conclusion
We have improved our previous incremental, EKF-based detection method for fMRI time series by designing an original Kalman variant called the refined Kalman filter (RKF). The new method achieves excellent statistical performance without requiring any initialization parameter, unlike the EKF and classical variants such as the second-order EKF or the unscented Kalman filter [9]. The price to pay is essentially an increased memory load, as the RKF actually tends to run slightly faster than the EKF.
References
1. Friston, K.J.: Chapter 2. In: Human Brain Function. Academic Press (1997) 25–42
2. Cox, R., Jesmanowicz, A., Hyde, J.: Real-Time Functional Magnetic Resonance Imaging. Magnetic Resonance in Medicine 33 (1995) 230–236
3. Gembris, D., Taylor, J., Schor, S., Frings, W., Suter, D., Posse, S.: Functional Magnetic Resonance Imaging in Real Time (FIRE). Magnetic Resonance in Medicine 43 (2000) 259–268
4. Roche, A., Lahaye, P.J., Poline, J.B.: Incremental Activation Detection in fMRI Series Using Kalman Filtering. In: Proc. 2nd IEEE ISBI, Arlington, VA (2004) 376–379
5. Woolrich, M., Ripley, B., Brady, M., Smith, S.: Temporal Autocorrelation in Univariate Linear Modelling of fMRI Data. NeuroImage 14 (2001) 1370–1386
6. Worsley, K., Liao, C., Aston, J., Petre, V., Duncan, G., Morales, F., Evans, A.: A General Statistical Analysis for fMRI Data. NeuroImage 15 (2002) 1–15
7. Friston, K., Penny, W., Phillips, C., Kiebel, S., Hinton, G., Ashburner, J.: Classical and Bayesian Inference in Neuroimaging: Theory. NeuroImage 16 (2002) 465–483
8. Penny, W., Kiebel, S., Friston, K.: Variational Bayesian Inference for fMRI Time Series. NeuroImage (2003) In press.
9. Julier, S., Uhlmann, J.: A New Extension of the Kalman Filter to Nonlinear Systems. In: Int. Symp. Aerospace/Defense Sensing, Simulation and Controls (1997)