Akaike Causality in State Space
Part I - Instantaneous Causality Between Visual
Cortex in fMRI Time Series
K.F. Kevin Wong, Tohru Ozaki
December 21, 2006
Abstract
We present a new approach to explaining partial causality in multivariate fMRI time series by a state space model. A given single time series can be divided into two noise-driven processes: a homogeneous process shared among the multivariate time series and a particular process refining the homogeneous process. Assuming that the noises are independent, a causality map is drawn using Akaike noise contribution ratio theory. The method is illustrated by an application to fMRI data recorded under visual stimulation.
Keywords: Akaike causality, noise contribution ratio, state space model, common source, partial causality, functional MRI, primary visual cortex, middle temporal cortex, posterior parietal cortex.
1 Introduction
For the purpose of causality analysis in multivariate time series data, Akaike (1968) decomposes the power spectral density into components, each coming from an independent noise of a multivariate autoregressive (VAR) model. Controversy over Akaike noise contribution ratio (NCR) causality mainly concerns the validity of the causality when the residuals are spatially highly correlated, a phenomenon reflected in large off-diagonal entries of the noise covariance matrix. When the driving noises are highly correlated instantaneously, the independence assumption on the noises is not adequate, and a non-zero noise covariance is essential to improve the time series model. This indispensable covariance suggests that the two corresponding time series are driven by similar noises, which apparently show an instantaneous causal relationship from one to the other, without a clue as to which is causing which.
The causality being discussed is known as instantaneous causality. Geweke (1982) tested the likelihood ratio in order to decide the significance of instantaneous causality. One deficiency is that the causality is no longer clear-cut when the model order becomes high: instantaneous causality is then fed back through the autoregressive process, so that it plays a role beyond the purely instantaneous one.
We propose an alternative way to look at instantaneous causality through a state space model (Wong, 2005). In particular, instead of an undirected causality between two variables, we assume a directed causality from a latent variable to the two variables. We model the fMRI data by a linear autoregressive model plus a homogeneous variable in a state space framework.
2 fMRI data under visual stimulus
The data selected to illustrate our new method are taken from a recent study by Yamashita et al. (2005). The time series of the BOLD signal of a healthy subject under visual stimulation was obtained in an fMRI scanner. A black screen was presented to the subject for 30 seconds; then white dots appeared on the black screen and flew outwards from the center of the screen for 30 seconds. The two screens alternated every 30 seconds. The detailed experimental and pre-processing procedures can be found in Yamashita et al. (2005).
Yamashita et al. (2005) selected three regions of interest: primary visual cortex (V1), visual cortex area 5 (V5) and posterior parietal cortex (PP). These regions are reported to respond to human attention to visual motion (Büchel & Friston, 1997). The primary visual cortex (V1) is the entrance for visual stimuli. Through V1, information is further transmitted to other visual areas, such as V2, V3, V4 and V5. The visual area V5, also known as visual area MT (middle temporal), is a region of extrastriate visual cortex that is thought to play a major role in the perception of motion. The posterior parietal cortex (PP) is another distinctive cortical area; it appears to be important for spatial processing and the control of eye movements, and may also have a central role in visual attention. We are interested in the connectivity among these areas in response to visual stimuli.
In figure 1 we show the time series data on a time axis in seconds. The data set contains four discontinuous segments, each with 270 time points covering 270 seconds. Yamashita et al. (2005) analyzed the time series by a VAR, adding the onset times of the stimulus as an exogenous variable to the model. They reported strong connectivity from V1 to V5 and from V5 to PP at a period of 60 seconds, which is the time between the starting times of two consecutive stimuli.
3 Method and Result
We intend to fit the time series to a state space model and plot a causality map based on the model. A latent variable is included in the state vector in order to remove a common dynamic which drives the three cortical areas simultaneously. Meanwhile, the three individual driving noises representing the corresponding cortical areas retain mutual causality through the feedback system provided by the transition matrix.

Figure 1: fMRI BOLD signals (PP, V1, V5) under visual stimuli; time axis in seconds, 0–1080.
Let y_t denote the observed data and x_t the unobserved state. We assume that x_t depends on its past values through a linear stochastic model, containing a dynamical noise term, and that y_t follows from x_t through a linear observation model, containing an observation noise term; then the following state space model applies:

x_t = F x_{t-1} + G w_t    (1)
y_t = H x_t + ε_t    (2)

Equations (1) and (2) are commonly known as the system equation and the observation equation, respectively. w_t denotes the dynamical noise term of the system equation, assumed to follow a multivariate Gaussian distribution w_t ∼ N(0, Q), while ε_t denotes the observation noise term of the observation equation, assumed to follow a Gaussian distribution ε_t ∼ N(0, R).
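To make the model concrete, the generative process of Equations (1) and (2) can be sketched as follows. This is a minimal illustration of ours, not code from the paper; the helper name `simulate_ssm` and the use of NumPy are assumptions.

```python
import numpy as np

def simulate_ssm(F, G, H, Q, R, n, seed=0):
    """Draw n observations from x_t = F x_{t-1} + G w_t, y_t = H x_t + eps_t,
    with w_t ~ N(0, Q) and eps_t ~ N(0, R)."""
    rng = np.random.default_rng(seed)
    m, k = G.shape
    ell = H.shape[0]
    x = np.zeros(m)
    ys = np.empty((n, ell))
    for t in range(n):
        w = rng.multivariate_normal(np.zeros(k), Q)      # dynamical noise
        x = F @ x + G @ w                                # system equation (1)
        eps = rng.multivariate_normal(np.zeros(ell), R)  # observation noise
        ys[t] = H @ x + eps                              # observation equation (2)
    return ys
```

In the fitted model below R is the zero matrix, so the simulated observations reduce to the first three state components.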
Kalman (1960) introduced a filtering technique for state space models which efficiently calculates the conditional prediction and the conditional filtered estimate of the unobserved states. Comprehensive introductions to state space models and Kalman filtering are provided by Kalman (1960), Harrison & Stevens (1976), Harvey (1989) and Grewal & Andrews (2001).
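The prediction-error decomposition used later for likelihood evaluation can be sketched as below. This is a simplified illustration under our own assumptions (the helper name `kalman_loglik` and the identity initialization of the state covariance are not from the paper):

```python
import numpy as np

def kalman_loglik(y, F, G, H, Q, R):
    """Gaussian prediction-error log-likelihood of the state space model
    x_t = F x_{t-1} + G w_t,  y_t = H x_t + eps_t  (illustrative helper)."""
    m = F.shape[0]
    x = np.zeros(m)       # filtered state mean
    P = np.eye(m)         # filtered state covariance (simple initialization)
    GQG = G @ Q @ G.T
    ll = 0.0
    for yt in y:
        # one-step prediction
        x_pred = F @ x
        P_pred = F @ P @ F.T + GQG
        # innovation and its covariance
        v = yt - H @ x_pred
        S = H @ P_pred @ H.T + R
        ll += -0.5 * (np.log(np.linalg.det(S))
                      + v @ np.linalg.solve(S, v)
                      + len(v) * np.log(2 * np.pi))
        # filtering update
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ v
        P = (np.eye(m) - K @ H) @ P_pred
    return ll
```

Maximizing this quantity over the free parameters gives the maximum-likelihood estimates reported below.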
Since we aim at decomposing the time series into a common-source component and particular-source components, we choose a special structure for the state space model, such that the last element of the state vector x_t represents the common-source component, while the leading elements of the vector form a 3-variate AR model. We thus have a canonical form (Aoki, 1990) for the 3-variate AR and a coefficient for the common source along the diagonal of F. The 3-variate AR should capture the main characteristics of the time series, whereas the common source should only capture the instantaneous and simultaneous dynamics; therefore the coefficient for the common source should be small, for instance 0.05.
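The block structure just described can be sketched as follows, for k = 3 series, p = 4 lags and one common source. This is our own hypothetical construction mirroring that layout; the AR blocks A_1, …, A_p and the loading vector c would come from the estimation.

```python
import numpy as np

def build_F(A_blocks, c, rho=0.05):
    """Transition matrix in the canonical form described above:
    A_blocks = [A_1, ..., A_p], each a k x k AR coefficient block;
    c = length-k loading of the common source on the first block row;
    rho = small AR coefficient of the common source."""
    p = len(A_blocks)
    k = A_blocks[0].shape[0]
    m = k * p + 1
    F = np.zeros((m, m))
    for j, A in enumerate(A_blocks):
        F[j * k:(j + 1) * k, 0:k] = A               # AR blocks, first k columns
    for j in range(p - 1):
        F[j * k:(j + 1) * k, (j + 1) * k:(j + 2) * k] = np.eye(k)  # shift identities
    F[0:k, -1] = c                                  # common-source loading
    F[-1, -1] = rho                                 # small common-source dynamics
    return F

def build_G_H(k=3, p=4):
    """Noise-input matrix G and observation matrix H for this layout."""
    m = k * p + 1
    G = np.zeros((m, k + 1))
    G[0:k, 0:k] = np.eye(k)   # individual dynamical noises
    G[-1, -1] = 1.0           # common-source noise
    H = np.zeros((k, m))
    H[:, 0:k] = np.eye(k)     # observe the first k state elements
    return G, H
```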
The model parameters in Equations (1) and (2) are estimated from the given data by the maximum-likelihood method. Given a set of parameters, computation of the likelihood from the errors of the data prediction through application of the Kalman filter is straightforward; see Mehra (1971), Åström & Kallstrom (1973), Sorenson (1985) and Valdés-Sosa et al. (1999) for a detailed treatment. A maximum-likelihood estimate for the state space model is as follows.
F =
  [ 3.0165  0.1486  0.0516   1 0 0   0 0 0   0 0 0   1.0000
    0.0414  3.1754  0.0023   0 1 0   0 0 0   0 0 0   1.1335
    0.0246  0.0690  3.1140   0 0 1   0 0 0   0 0 0   0.9725
    3.6868  0.3493  0.0693   0 0 0   1 0 0   0 0 0   0
    0.0307  4.0918  0.0083   0 0 0   0 1 0   0 0 0   0
    0.0236  0.1487  4.0049   0 0 0   0 0 1   0 0 0   0
    2.2126  0.2975  0.0214   0 0 0   0 0 0   1 0 0   0
    0.0451  2.5811  0.0089   0 0 0   0 0 0   0 1 0   0
    0.0801  0.1102  2.5351   0 0 0   0 0 0   0 0 1   0
    0.5601  0.0873  0.0069   0 0 0   0 0 0   0 0 0   0
    0.0348  0.6735  0.0007   0 0 0   0 0 0   0 0 0   0
    0.0348  0.0219  0.6720   0 0 0   0 0 0   0 0 0   0
    0       0       0        0 0 0   0 0 0   0 0 0   0.0500 ],

G =
  [ 1 0 0 0
    0 1 0 0
    0 0 1 0
    0 0 0 0
    ⋮
    0 0 0 0
    0 0 0 1 ],

H =
  [ 1 0 0 0 … 0
    0 1 0 0 … 0
    0 0 1 0 … 0 ],

Q = diag(0.0460, 0.0515, 0.0631, 0.0335),   R = 0 (3 × 3).
The Akaike information criterion (AIC), a value for comparing statistical models by weighing the likelihood function against the number of model parameters, is 965.3 (= 879.3 + 2 × 43) for the state space model, compared with 984.4 (= 900.4 + 2 × 42) for a VAR(4) with a full noise covariance matrix. This suggests that the state space model is the more suitable model for the time series.
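The comparison uses the standard definition AIC = −2 log L + 2k; plugging in the numbers reported above:

```python
def aic(minus_two_log_likelihood, n_params):
    """AIC = -2 log-likelihood + 2 * (number of free parameters)."""
    return minus_two_log_likelihood + 2 * n_params

aic_ss = aic(879.3, 43)    # state space model -> 965.3
aic_var = aic(900.4, 42)   # VAR(4) with full noise covariance -> 984.4
# the smaller AIC (state space model) indicates the preferred model
```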
In figure 2(a) we show the spectra of the fMRI series of, from left to right, PP, V1 and V5, based on the estimated state space model. Each spectrum is decomposed into 4 colors, corresponding to the 4 system noises of the state space model. By the state space structure, green, red, yellow and black correspond respectively to PP, V1, V5 and the common source. Through F, G and H, the 4 noises contribute to the time series distinctively, as shown by the model spectra. Among the 3 spectra, that of V1 has the highest power intensity at around 0.02, an oscillation of about 50-second period, which can also be seen clearly in the data.

Figure 2: Model spectra, NCR causality map and partial NCR causality map of the state space model
In figure 2(b) we show the NCR causality map, obtained by normalizing the spectra in (a). At each frequency the spectral power intensity is rescaled to the range 0% to 100%, so that the ratio of the contribution from each noise variance can be seen clearly at each frequency. Since most of the power intensity is concentrated in the interval from 0 to 0.06, we shall explain causality based on this interval. The black color is, by assumption, the noise driving the time series simultaneously. We can see that this common source explains over 50% of the power intensity at 0 Hz in all the spectra. It also accounts for over 50% of the power intensity in the lower frequency region of V1. Note that this common source has been introduced into the state space model through an AR process with coefficient 0.05, meaning that this noise does not provide an additional characteristic root to the transition matrix, but rather spares more room for the correlated residuals of the AR.
In figure 2(c) we show the partial NCR causality map, in which the contribution of the common source, i.e. black, is eliminated. The remaining colors tell the causality from the independent noises to the time series. V1 shows up in the low frequency range, indicating that the causality from V1 to PP and V5 is significant. PP causes V1 and V5 a little, mostly in the neighborhood of 0.05 (20–25-second period oscillation), while V5 causes PP a little and V1 negligibly.
We compare the above result to the causality result from an AR model whose noise covariance matrix is diagonal, estimated by the least squares method. The AIC of an AR(4) with diagonal noise covariance is 1452.7 (= 1374.7 + 2 × 39), a value much greater than the AIC of the state space model, meaning that this AR(4) is less suitable for the time series.
y_t = A_1 y_{t-1} + A_2 y_{t-2} + A_3 y_{t-3} + A_4 y_{t-4} + η_t,

where y_t = (y_t^(1), y_t^(2), y_t^(3))′ and

A_1 = [ 3.0183  0.1499  0.0504
        0.0433  3.1768  0.0036
        0.0262  0.0701  3.1151 ],
A_2 = [ 3.6896  0.3517  0.0674
        0.0337  4.0944  0.0063
        0.0212  0.1507  4.0064 ],
A_3 = [ 2.2126  0.2985  0.0205
        0.0453  2.5821  0.0080
        0.0805  0.1108  2.5357 ],
A_4 = [ 0.5589  0.0873  0.0070
        0.0362  0.6734  0.0008
        0.0360  0.0217  0.6720 ],

η_t = (η_t^(1), η_t^(2), η_t^(3))′ ∼ N( 0, diag(0.0797, 0.0948, 0.0949) ).
By this AR(4) we plot the model spectra in figure 3(a) and the NCR causality map in figure 3(b). To our surprise, this NCR causality map is very similar to that in figure 2(b). On one hand this shows that the common-source component was added to lessen the squared residuals without taking away any model characteristics, as assumed; on the other hand it assures us that our state space result is consistent.
Figure 3: Model spectra and NCR causality map
4 Discussion
We have proposed a new method to apply Akaike causality in a state space framework, so that the main limitation of Akaike causality, arising when the residuals of the VAR are highly correlated, is resolved. The correlation between the noises of the VAR is separated out as an additional independent noise homogeneously driving the multivariate time series in the state space framework. By comparing the AIC we found that the state space model fits better than the VAR.
The idea in this paper can be further extended. Besides a common-source component for all time series, pairwise or tuple-wise common-source components can be added to the model. For instance, in this paper, in addition to the common source in black, we could introduce one more color for a common source of V1 and V5 but not PP, so that the common source within the visual cortex could be further eliminated. More generally, we should also add the other two combinations of pairwise common-source components.

However, time series in real applications often share common characteristics. A common characteristic greatly shared by two time series may also appear in the other time series, so it should not be taken as negligibly zero even though its strength is small. Therefore pairwise common variables could easily be absorbed by an overall homogeneous variable. See Tanokura & Kitagawa (2003) for a similar treatment.
Like any other causality theory, Akaike causality has to be based on a model. The goodness of the estimated model greatly affects the causality conclusion. Therefore, before drawing any causality conclusion, much effort must be spent on finding a suitable model.
5 Appendix
5.1 State space model and its ARMA representation
Here we give the ARMA representation of the state space model. Referring to Equations (1) and (2), let F, G and H be of sizes m × m, m × k and ℓ × m respectively. Then F has m eigenvalues and hence a characteristic polynomial of order m, so that by the Cayley–Hamilton theorem

F^m − φ_1 F^{m-1} − φ_2 F^{m-2} − ⋯ − φ_{m-1} F − φ_m I = 0 .
By this, a linear state space model can be transformed to a VARMA in terms of the observed data y and the noises η:

y_t − φ_1 y_{t-1} − φ_2 y_{t-2} − ⋯ − φ_m y_{t-m}
  = Θ_0 η_t + Θ_1 η_{t-1} + Θ_2 η_{t-2} + ⋯ + Θ_{m-1} η_{t-m+1} + Θ_m η_{t-m}    (3)
Let I be the identity matrix. Then

Θ_0 = ( HG   I )
Θ_1 = ( H(F − φ_1 I)G   −φ_1 I )
Θ_2 = ( H(F² − φ_1 F − φ_2 I)G   −φ_2 I )
  ⋮
Θ_{m-1} = ( H(F^{m-1} − φ_1 F^{m-2} − ⋯ − φ_{m-2} F − φ_{m-1} I)G   −φ_{m-1} I )
Θ_m = ( 0   −φ_m I )

η_{t-j} = ( w_{t-j} ; ε_{t-j} ) ∼ N(0, Σ),   Σ = ( Q 0 ; 0 R ).
The autoregressive coefficients of the VARMA are scalars, namely the coefficients of the characteristic equation of F. The moving average coefficients Θ are formed by two block matrices, of sizes ℓ × k and ℓ × ℓ, which depend only on F, G and H. The noise vector η is formed by stacking w_t and ε_t vertically. Note that the size of η is not necessarily the same as that of y. Although the autoregressive part is molded identically for all variables in y, the moving average part refines each variable uniquely.
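The scalar coefficients φ_1, …, φ_m can be read off the characteristic polynomial of F. A sketch using NumPy (the helper name is our own):

```python
import numpy as np

def ar_coeffs_from_F(F):
    """Coefficients phi_1, ..., phi_m of the characteristic polynomial of F,
    det(zI - F) = z^m - phi_1 z^{m-1} - ... - phi_m,
    which by the Cayley-Hamilton theorem are the scalar AR coefficients
    of the VARMA representation."""
    c = np.poly(F)   # [1, c_1, ..., c_m] with det(zI - F) = z^m + c_1 z^{m-1} + ...
    return -c[1:]    # phi_j = -c_j
```

As a sanity check, the returned coefficients satisfy F^m − φ_1 F^{m-1} − ⋯ − φ_m I = 0 numerically.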
5.2 Akaike causality for VARMA and State Space
Here we give the derivation of Akaike causality for the VARMA only; Akaike causality for the state space model follows by combining this result with the formulas of the previous subsection.

From Equation (3) we obtain the power spectral density matrix P_f of a VARMA:

F_f(Φ) = I + Σ_{j=1}^{p} Φ_j e^{−2πijf},   F_f(Θ) = Σ_{j=0}^{q} Θ_j e^{−2πijf},

P_f = F_f(Φ)^{−1} F_f(Θ) Σ F_f(Θ)^H { F_f(Φ)^{−1} }^H .

At each frequency f, the diagonal elements of P_f are the spectral densities of the time series and the off-diagonal elements are the cross spectral densities. If Σ is a diagonal matrix, each diagonal element of P_f is a weighted sum of the diagonal entries of Σ. By this, Akaike NCR causality is defined as the proportion of power from one noise variance relative to the power from all noise variances:

NCR(σ², y_t) = (spectral density going to y_t from σ²) / (total spectral density going to y_t from all variances).
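For the pure AR case (no moving-average part), the NCR at a given frequency can be computed as sketched below. This is our own illustrative helper, assuming a diagonal Σ with variances `sigma2`:

```python
import numpy as np

def ncr_var(Phi, sigma2, f):
    """Akaike NCR matrix at frequency f (cycles per sample) for a VAR
    y_t = Phi_1 y_{t-1} + ... + Phi_p y_{t-p} + eta_t with independent
    noise variances sigma2. Entry (i, j) is the share of series i's
    spectral power contributed by noise j."""
    k = len(sigma2)
    A = np.eye(k, dtype=complex)
    for lag, Pj in enumerate(Phi, start=1):
        A -= Pj * np.exp(-2j * np.pi * f * lag)       # F_f(Phi) with Phi_j = -A_j
    T = np.linalg.inv(A)                              # noise-to-series transfer
    contrib = np.abs(T) ** 2 * np.asarray(sigma2)     # power from each noise
    return contrib / contrib.sum(axis=1, keepdims=True)
```

By construction, each row of the returned matrix sums to one, which is exactly the normalization used to draw the NCR causality maps.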
Acknowledgements
The authors would like to thank Dr Okito Yamashita and Prof Norihiro Sadato for providing the fMRI data, with special thanks to Prof Rolando Biscay for his comments and guidance.

This work was supported by the Atsumi International Scholarship Foundation, the Iwatani Naoji Foundation, the Research Institute of Science and Technology for Society of the Japan Science and Technology Agency, and the Japanese Society for the Promotion of Science through Kiban B no. 173000922301.
References
Akaike, H. (1968). On the use of a linear model for the identification of feedback systems. Annals of the Institute of Statistical Mathematics 20 425–439.

Aoki, M. (1990). State Space Modeling of Time Series. New York: Springer-Verlag.

Åström, K. J. & Kallstrom, C. G. (1973). Application of system identification techniques to the determination of ship dynamics. In P. Eykhoff, ed., Identification and System Parameter Estimation. Amsterdam: North-Holland.

Büchel, C. & Friston, K. J. (1997). Modulation of connectivity in visual pathways by attention: Cortical interactions evaluated with structural equation modelling and fMRI. Cereb. Cortex 7 768–778.

Geweke, J. F. (1982). Measurement of linear dependence and feedback between multiple time series. Journal of the American Statistical Association 77 304–324.

Grewal, M. S. & Andrews, A. P. (2001). Kalman Filtering: Theory and Practice Using MATLAB, 2nd edition. New York: Wiley.

Harrison, J. & Stevens, C. F. (1976). Bayesian forecasting (with discussion). Journal of the Royal Statistical Society, Series B 38 205–247.

Harvey, A. C. (1989). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press.

Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82 35–45.

Mehra, R. K. (1971). Identification of stochastic linear dynamic systems. American Institute of Aeronautics and Astronautics Journal 9 28–31.

Sorenson, H. W. (1985). Kalman Filtering: Theory and Application. IEEE Press.

Tanokura, Y. & Kitagawa, G. (2003). Extended power contribution that can be applied without independence assumption. Tech. Rep. 886, The Institute of Statistical Mathematics.

Valdés-Sosa, P., Jimenez, J. C., Riera, J., Biscay, R. & Ozaki, T. (1999). Nonlinear EEG analysis based on a neural mass model. Biological Cybernetics 81 348–358.

Wong, K. F. K. (2005). Multivariate Time Series Analysis of Heteroscedastic Data with Application to Neuroscience. Ph.D. thesis, Graduate University for Advanced Studies.

Yamashita, O., Sadato, N., Okada, N. & Ozaki, T. (2005). Evaluating frequency-wise directed connectivity of BOLD signals applying relative power contribution with the linear multivariate time series models. NeuroImage 25 478–490.
... A statistical time-series model was fitted to the data in order to explain the spatiotemporal dynamics of the data and the causal relationships between the movements of both hands. An AR model can be used to elucidate the propagation of information from the past to the future; however, it is difficult to describe causal relationships when the driving noise variances are highly correlated (Yamashita et al., 2005; Wong and Ozaki, 2007). One solution is to obtain data at a finer temporal resolution; another is to include a hidden variable to absorb the common dynamic among the variables (Wong and Ozaki, 2007). ...
... An AR model can be used to elucidate the propagation of information from the past to the future; however, it is difficult to describe causal relationships when the driving noise variances are highly correlated (Yamashita et al., 2005; Wong and Ozaki, 2007). One solution is to obtain data at a finer temporal resolution; another is to include a hidden variable to absorb the common dynamic among the variables (Wong and Ozaki, 2007). Besides the two oscillations that explained the individual dynamics of each hand, we introduced a hidden variable to measure the dynamics that were common to both the left hand and the right hand. ...
... The NCR gives a proportion of directional causality over the frequency interval from 0 Hz to the Nyquist frequency, which was 5 Hz in this study. The calculation of the NCR is straightforward (Wong and Ozaki, 2007). The spectral power over the frequency band of each component was computed using Simpson's numerical integration rule. ...
Article
Mirror-symmetrical bimanual movement is more stable than parallel bimanual movement. This is well established at the kinematic level. We used functional MRI (fMRI) to evaluate the neural substrates of the stability of mirror-symmetrical bimanual movement. Right-handed participants (n=17) rotated disks with their index fingers bimanually, both in mirror-symmetrical and asymmetrical parallel modes. We applied the Akaike causality model to both kinematic and fMRI time-series data. We hypothesized that kinematic stability is represented by the extent of neural "cross-talk": as the fraction of signals that are common to controlling both hands increases, the stability also increases. The standard deviation of the phase difference for the mirror mode was significantly smaller than that for the parallel mode, confirming that the former was more stable. We used the noise-contribution ratio (NCR), which was computed using a multivariate autoregressive model with latent variables, as a direct measure of the cross-talk between both the two hands and the bilateral primary motor cortices (M1s). The mode-by-direction interaction of the NCR was significant in both the kinematic and fMRI data. Furthermore, in both sets of data, the NCR from the right hand (left M1) to the left (right M1) was more prominent than vice versa during the mirror-symmetrical mode, whereas no difference was observed during parallel movement or rest. The asymmetric interhemispheric interaction from the left M1 to the right M1 during symmetric bimanual movement might represent cortical-level cross-talk, which contributes to the stability of symmetric bimanual movements.
... Figure 9 shows that in this case all the artifacts are explained by the additional variable, while the connectivity patterns between the 16 original variables included in the analysis remain intact. This effect has been previously described by Wong and Ozaki (2007) for the case when the innovations from the estimation of the multivariate AR model is not diagonal (not white innovations), meaning that some variance of the system is still not explained by the model. In this case, they propose to add an external (latent) variable, to gather all the unexplained variance. ...
... Here, Pz appears as a common source of causality for all voxels in the system, to account for the unexplained influence coming from F7, except for O1 and O2, which do not receive influenced from any other variable in the original system. This result is a beautiful example of the latent variable effect mentioned by Wong and Ozaki (2007). Note that Pz does not receive influence from any other variable. ...
Article
Full-text available
Due to its low resolution, any EEG inverse solution provides a source estimate at each voxel that is a mixture of the true source values over all the voxels of the brain. This mixing effect usually causes notable distortion in estimates of source connectivity based on inverse solutions. To lessen this shortcoming, an unmixing approach is introduced for EEG inverse solutions based on piecewise approximation of the unknown source by means of a brain segmentation formed by specified Regions of Interests (ROIs). The approach is general and flexible enough to be applied to any inverse solution with any specified family of ROIs, including point, surface and 3D brain regions. Two of its variants are elaborated in detail: arbitrary piecewise constant sources over arbitrary regions and sources with piecewise constant intensity of known direction over cortex surface regions. Numerically, the approach requires just solving a system of linear equations. Bounds for the error of unmixed estimates are also given. Furthermore, insights on the advantages and of variants of this approach for connectivity analysis are discussed through a variety of designed simulated examples.
... To test our hypothesis, cross correlation was applied as an analysis method to look into the relation between the timing of the synchronization of postural sway and the exchange of the type of visual input between paired individuals. To further test our hypothesis we dissociated the influence between paired participants using autoregressive (MVAR) model estimation and a causality analysis [32][33][34]. Finally, the time series of postural sway of the two individuals were simulated based on the estimated MVAR model. The results of the present study support our hypothesis, suggesting the significance of timing to individuals engaged in reciprocal interaction for lag-0 synchronization of postural sway. ...
... Multivariate autoregressive model estimation and causality analysis. Using an MVAR model, we computed the noise contribution ratio (NCR), an index representing the degree of influence between two participants [32][33][34]. From the analysis options available to us that result in the same computational output (i.e., the Granger causality test with the recent developments, e.g., [37]), we chose Akaike causality (see Ozaki, 2012 [33] for the comparison between the two analysis methods). We did so because this analysis method takes the power spectrums into account and is focused on computing the degree of influence between the variables we are interested in. ...
Article
Full-text available
People's behaviors synchronize. It is difficult, however, to determine whether synchronized behaviors occur in a mutual direction—two individuals influencing one another—or in one direction—one individual leading the other, and what the underlying mechanism for synchronization is. To answer these questions, we hypothesized a non-leader-follower postural sway synchronization, caused by a reciprocal visuo-postural feedback system operating on pairs of individuals, and tested that hypothesis both experimentally and via simulation. In the behavioral experiment, 22 participant pairs stood face to face either 20 or 70 cm away from each other wearing glasses with or without vision blocking lenses. The existence and direction of visual information exchanged between pairs of participants were systematically manipulated. The time series data for the postural sway of these pairs were recorded and analyzed with cross correlation and causality. Results of cross correlation showed that pos-tural sway of paired participants was synchronized, with a shorter time lag when participant pairs could see one another's head motion than when one of the participants was blindfolded. In addition, there was less of a time lag in the observed synchronization when the distance between participant pairs was smaller. As for the causality analysis, noise contribution ratio (NCR), the measure of influence using a multivariate autoregressive model, was also computed to identify the degree to which one's postural sway is explained by that of the other's and how visual information (sighted vs. blindfolded) interacts with paired par-ticipants' postural sway. It was found that for synchronization to take place, it is crucial that paired participants be sighted and exert equal influence on one another by simultaneously exchanging visual information. Furthermore, a simulation for the proposed system with a wider range of visual input showed a pattern of results similar to the behavioral results.
... Akaike Causality is a method to show the strength of causality between multiple variables based on dividing the power spectral density of an optimal autoregressive model [7]. The denser the power spectral density, then the stronger the causality between two indicators [8]. ...
Article
Full-text available
In order to study the influencing factors of light pollution in China, this study aims to explore the main influencing factors of light pollution in China. First, 30 indicators, including population density, electricity output, land and marine protected area, and GDP per capita, were screened from five dimensions: economic development, population, ecology and geography, and 197 regional samples were selected from the global scale for analysis. Through the factor analysis dimensionality reduction process, the 30 indicators were grouped into five dimensions: air pollution level, biodiversity, natural resource storage, population density and economic development index, with a cumulative explanation rate of 97.89%. Subsequently, the entropy weight method (EWM) and TOPSIS model were used to assess the light pollution risk of the sample. The entropy weighting method determined the weights of the factors as air pollution level (0.212), biodiversity (0.346), natural resource storage (0.144), population density (0.282) and economic development index (0.016). Based on these weights, the light pollution risk index was calculated for each region. Finally, Spearman correlation analysis showed that population density (0.8193) and air pollution level (0.5101) were significantly and positively correlated with the light pollution index, while the economic development index (0.07903), biodiversity (-0.095), and natural resource stock (-0.02292) were weakly correlated. The study suggests that high population density and air pollution are the main drivers of light pollution, and it is recommended that these factors be prioritized in urban planning and environmental management to effectively control light pollution.
... For the purpose of causality analysis in multivariate time series data, Akaike (1968) proposed to decompose the power spectral density into components, each coming from an independent noise of multivariate autoregressive model (MAR). Akaike's noise contribution ratio (NCR) causality has been applied to ship engineering (Otomo et al., 1972), nuclear power plant research (Fukunishi, 1977), neuroscience (Yamashita et al., 2005, Wong andOzaki, 2007) and physical science (Maki et al., 2008). Unlike the more popular Granger causality (Granger, 1969) concept which focuses only in the time domain, NCR is a direct frequency domain definition. ...
Article
Full-text available
Akaike's Noise Contribution Ratio (NCR) has been used for the analysis of causality of two-variable settings of biological time series in Neuroscience. In contrast to the conventional correlation definition, this methodology is able to detect the direction of the influence between two variables. However, if a third series intervention is taken into account, the validity of causality is questionable, since possible feedback with third series can induce spurious or indirect causality. In this paper, we introduce a modification to NCR that accounts for partial directed causality for the case of more than two variables (pNCR). We also extend this methodology for the case of non-stationary time series by means of the use of the sliding windows technique, which provides a time-frequency approach. This methodology produces a 2D matrix (time and frequency) of pNCR coefficients, which is difficult to interpret and visualize. To facilitate the visualization and interpretation of the pNCR for the case of non-stationary time series, we summarize the information of the spectrum of the pNCR as the area under the curve (pNCA), which projects this 2D matrix into the 1D space (a vector of coefficients), which shows the time course rate of influence from one variable to another in both directions.
... The method is illustrated by an application to fMRI data recorded under visual stimulation. (Wong & Ozaki, 2007) ...
Article
Full-text available
Tensor decompositions (e.g., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
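The ALS baseline this abstract compares against can be sketched compactly for a 3-way tensor: each factor matrix is updated in turn by a least squares solve against the matching unfolding. This is a generic textbook sketch, not the CPOPT code discussed in the paper.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # column-wise Kronecker product: row (i, j) -> A[i] * B[j]
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(T, rank, iters=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor by alternating
    least squares; returns the three factor matrices."""
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(iters):
        for m in range(3):
            o = [F[i] for i in range(3) if i != m]   # the two fixed factors
            kr = khatri_rao(o[0], o[1])
            gram = (o[0].T @ o[0]) * (o[1].T @ o[1])  # equals kr.T @ kr, cheaply
            F[m] = unfold(T, m) @ kr @ np.linalg.pinv(gram)
    return F
```

On an exact rank-1 tensor the iteration recovers the factors up to the usual scaling indeterminacy, so the reconstruction matches the input.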
... In discrete time, it is clear that the covariance matrix of two or more time series may have cross-covariances that are due to an "environmental" or missing variable Z(t). This was discussed by Akaike, and a nice example of this effect is described in Wong and Ozaki (2007), which also explains the relation of the Akaike measures of influence to others used in the literature. For continuous time, Comte and Renault (1996) define strong (second order) conditional contemporaneous independence (not SCCi) if: (18) Note that this is the same definition for continuous time as for the discrete AR example (Eq. ...
Article
Full-text available
This is the final paper in a Comments and Controversies series dedicated to "The identification of interacting networks in the brain using fMRI: Model selection, causality and deconvolution". We argue that discovering effective connectivity depends critically on state-space models with biophysically informed observation and state equations. These models have to be endowed with priors on unknown parameters and afford checks for model identifiability. We consider the similarities and differences among Dynamic Causal Modeling, Granger Causal Modeling and other approaches. We establish links between past and current statistical causal modeling, in terms of Bayesian dependency graphs and Wiener-Akaike-Granger-Schweder influence measures. We show that some of the challenges faced in this field have promising solutions and speculate on future developments.
Chapter
This chapter focuses on effective connectivity, the causal relationships that govern interactions between neural systems. Utilizing state-space models based on biophysically informed observations and equations, we explore the bases of neural communication. This study connects historical and contemporary perspectives in statistical causal modeling by evaluating Dynamic Causal Modeling (DCM), Granger Causal Modeling (GCM), and other approaches, highlighting the imperative of assigning priors to unknown parameters and confirming the identifiability of models. Our analysis, based on Bayesian dependency graphs and the Wiener–Akaike–Granger–Schweder metrics, sheds light on the complex nature of effective connectivity. This exploration advances our understanding of the operational principles that underlie the brain's effective connectivity.
Article
State space modeling is an established framework for analyzing stochastic and deterministic dynamical systems that are measured or observed through a stochastic process. This highly flexible paradigm has been successfully applied in engineering, statistics, computer science, and economics to solve a broad range of dynamical systems problems. The revolution in neuroscience recording technologies in the last 20 years has provided many novel ways to study the dynamic activity of the brain and central nervous system. These technologies include multielectrode recording arrays, functional magnetic resonance imaging, electroencephalography and magnetoencephalography, diffuse optical tomography, calcium imaging, and behavioral data. Because a fundamental feature of many neuroscience data analysis problems is that the underlying neural system is dynamic and is observed indirectly through measurements from one or a combination of these different recording modalities, the state space paradigm provides an ideal framework for developing statistical tools to analyze neural data. Neural spiking activity recorded from single or multiple electrodes is one of the principal types of data recorded in neurophysiological experiments. Because neural spike trains are time series of action potentials, point process theory has been shown to provide an accurate framework for modeling the stochastic structure of single and multiple neural spike trains.
Article
The statistical analysis of spatial and temporal data is discussed from the viewpoint of an fMRI connectivity study. The limitations of the well-known SPM method for the characterization of fMRI connectivity are pointed out. The use of an innovation approach with NN-ARX is suggested to overcome the limitations of SPM modeling. The maximum likelihood method is presented for NN-ARX model estimation. The exploratory use of innovations for the identification of brain connectivity between remote voxels is discussed.
Article
Full-text available
This book provides readers with a solid introduction to the theoretical and practical aspects of Kalman filtering. It has been updated with the latest developments in the implementation and application of Kalman filtering, including adaptations for nonlinear filtering, more robust smoothing methods, and developing applications in navigation. All software is provided in MATLAB, giving readers the opportunity to discover how the Kalman filter works in action and to consider the practical arithmetic needed to preserve the accuracy of results. Note: CD-ROM/DVD and other supplementary materials are not included as part of eBook file. An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department -- to obtain the manual, send an email to [email protected]
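The predict/update cycle that such a text develops can be stated in a few lines. This is a generic textbook sketch of the linear Kalman filter in Python, not the book's MATLAB software.

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Linear Kalman filter for x_t = F x_{t-1} + w_t, y_t = H x_t + v_t,
    with w ~ N(0, Q) and v ~ N(0, R); returns the filtered state means."""
    x, P = np.asarray(x0, float), np.asarray(P0, float)
    out = []
    for yt in y:
        # predict: propagate the state mean and covariance
        x = F @ x
        P = F @ P @ F.T + Q
        # update: correct with the new observation
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (yt - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

With a nearly static state (tiny Q) and a diffuse prior (large P0), the filter locks onto a constant signal after a few observations.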
Article
The problem of determining ship dynamics is approached from the point of view of system identification. Free steering experiments on full-scale ships are considered. The data obtained in the experiment is used to estimate the parameters in a mathematical model of the ship. The structure of the model is determined from the dynamic equations of the ship. The results indicate that it is possible to obtain models for ship dynamics using the proposed scheme.
Book
model's predictive capability? These are some of the questions that need to be answered in proposing any time series model construction method. This book addresses these questions in Part II. Briefly, the covariance matrices between past data and future realizations of time series are used to build a matrix called the Hankel matrix. Information needed for constructing models is extracted from the Hankel matrix. For example, its numerically determined rank will be the dimension of the state model. Thus the model dimension is determined by the data, after balancing several sources of error for such model construction. The covariance matrix of the model forecasting error vector is determined by solving a certain matrix Riccati equation. This matrix is also the covariance matrix of the innovation process which drives the model in generating model forecasts. In these model construction steps, a particular model representation, here referred to as balanced, is used extensively. This mode of model representation facilitates error analysis, such as assessing the error of using a lower dimensional model than that indicated by the rank of the Hankel matrix. The well-known Akaike's canonical correlation method for model construction is similar to the one used in this book. There are some important differences, however. Akaike uses the normalized Hankel matrix to extract canonical vectors, while the method used in this book does not normalize the Hankel matrix.
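The rank-of-Hankel idea can be illustrated for a scalar series: stack lagged covariances into a Hankel matrix and count its numerically significant singular values. This simplified sketch (the book treats the multivariate, balanced case) uses illustrative names and a hypothetical tolerance.

```python
import numpy as np

def hankel_rank(cov, p, f, tol=1e-8):
    """Numerical rank of the Hankel matrix built from lagged covariances
    cov[k] = Cov(y_{t+k}, y_t) of a scalar series; the rank estimates the
    dimension of a state space model generating the series."""
    H = np.array([[cov[i + j + 1] for j in range(p)] for i in range(f)])
    s = np.linalg.svd(H, compute_uv=False)
    return int((s > tol * s[0]).sum())          # singular values above tolerance
```

An AR(1) process yields a rank-1 Hankel matrix (one-dimensional state), while a sum of two AR(1) components yields rank 2.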
Article
Measures of linear dependence and feedback for multiple time series are defined. The measure of linear dependence is the sum of the measure of linear feedback from the first series to the second, linear feedback from the second to the first, and instantaneous linear feedback. The measures are nonnegative, and zero only when feedback (causality) of the relevant type is absent. The measures of linear feedback from one series to another can be additively decomposed by frequency. A readily usable theory of inference for all of these measures and their decompositions is described; the computations involved are modest.
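In the time domain, the feedback measure from y to x described above is the log ratio of the residual variance of an autoregression of x on its own past to that of an autoregression on the past of both x and y. A small OLS sketch under that definition (names and the simulated example are illustrative):

```python
import numpy as np

def geweke_feedback(x, y, p=1):
    """Geweke's measure of linear feedback from y to x:
    log( Var(x_t | past x) / Var(x_t | past x, past y) ), via OLS AR(p) fits."""
    T = len(x)
    target = x[p:]
    lags_x = np.column_stack([x[p - k - 1:T - k - 1] for k in range(p)])
    lags_xy = np.column_stack([lags_x] + [y[p - k - 1:T - k - 1] for k in range(p)])

    def resid_var(X):
        X1 = np.column_stack([np.ones(len(X)), X])   # add an intercept
        beta, *_ = np.linalg.lstsq(X1, target, rcond=None)
        r = target - X1 @ beta
        return r @ r / len(r)

    return np.log(resid_var(lags_x) / resid_var(lags_xy))
```

Simulating x as driven by lagged y (but not vice versa) gives a clearly positive measure in one direction and a near-zero one in the other.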
Article
Measures of linear dependence and feedback for two multiple time series conditional on a third are defined. The measure of conditional linear dependence is the sum of linear feedback from the first to the second conditional on the third, linear feedback from the second to the first conditional on the third, and instantaneous linear feedback between the first and second series conditional on the third. The measures are non-negative and may be expressed in terms of measures of unconditional feedback between various combinations of the three series. The measures of conditional linear feedback can be additively decomposed by frequency. Estimates of these measures are straightforward to compute, and their distribution can be routinely approximated by bootstrap methods. An empirical example involving real output, money, and interest rates is presented.
Article
In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.