Article
iScience
Formation of brain-wide neural geometry during visual item recognition in monkeys
Graphical abstract
Highlights
• Neural geometries are thought to reflect computations of information in the brain
• We re-analyzed 10 neural populations, including 2,500 neurons of nine monkeys
• Rotational/curvy/straight geometries distributed in the temporal-frontal network
• Information propagates as a heterogeneous mixture of stochastic population signals
Authors
He Chen, Jun Kunimatsu, Tomomichi Oya, ..., Takafumi Minamimoto, Yuji Naya, Hiroshi Yamada
Correspondence
h-yamada@md.tsukuba.ac.jp
In brief
Biological sciences; Natural sciences; Neuroscience; Systems neuroscience
Chen et al., 2025, iScience 28, 111936
March 21, 2025 © 2025 The Author(s). Published by Elsevier Inc.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
https://doi.org/10.1016/j.isci.2025.111936
iScience
Article
Formation of brain-wide neural geometry during visual item recognition in monkeys
He Chen,1,2,13 Jun Kunimatsu,3,4,5,13 Tomomichi Oya,6,7 Yuri Imaizumi,8 Yukiko Hori,9 Masayuki Matsumoto,3,4 Yasuhiro Tsubo,10 Okihide Hikosaka,5 Takafumi Minamimoto,9 Yuji Naya,1,11,12 and Hiroshi Yamada3,4,14,*
1School of Psychological and Cognitive Sciences, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
2Department of Physiology and Biophysics, Washington National Primate Research Center, University of Washington, Seattle, WA 98195, USA
3Division of Biomedical Science, Institute of Medicine, University of Tsukuba, 1-1-1 Tenno-dai, Tsukuba, Ibaraki 305-8577, Japan
4Transborder Medical Research Center, University of Tsukuba, 1-1-1 Tenno-dai, Tsukuba, Ibaraki 305-8577, Japan
5Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
6Western Institute for Neuroscience, University of Western Ontario, London, ON N6A 3K7, Canada
7Department of Physiology and Pharmacology, University of Western Ontario, London, ON N6A 3K7, Canada
8College of Medical Sciences, University of Tsukuba, 1-1-1 Tenno-dai, Tsukuba, Ibaraki 305-8577, Japan
9Advanced Neuroimaging Center, National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
10College of Information Science and Engineering, Ritsumeikan University, 2-150 Iwakura-cho, Ibaraki, Osaka 567-8570, Japan
11IDG/McGovern Institute for Brain Research at Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
12Beijing Key Laboratory of Behavior and Mental Health, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
13These authors contributed equally
14Lead contact
*Correspondence: h-yamada@md.tsukuba.ac.jp
https://doi.org/10.1016/j.isci.2025.111936
SUMMARY
Neural dynamics are thought to reflect computations that relay and transform information in the brain. Previous studies have identified the neural population dynamics in many individual brain regions as a trajectory geometry, preserving a common computational motif. However, whether these populations share particular geometric patterns across brain-wide neural populations remains unclear. Here, by mapping neural dynamics widely across temporal/frontal/limbic regions in the cortical and subcortical structures of monkeys, we show that 10 neural populations, including 2,500 neurons, propagate visual item information in a stochastic manner. We found that visual inputs predominantly evoked rotational dynamics in the higher-order visual area, TE, and its downstream striatum tail, while curvy/straight dynamics appeared frequently downstream in the orbitofrontal/hippocampal network. These geometric changes were not deterministic but rather stochastic according to their respective emergence rates. Our meta-analysis results indicate that visual information propagates as a heterogeneous mixture of stochastic neural population signals in the brain.
INTRODUCTION
Visual inputs activate a large number of neurons in the brain that construct numerous neural networks to process information in an environment.1-3 This brain-wide activity change reflects the information processing embedded in each individual neural circuit; however, limitations in the spatial and temporal resolution of circuitry-activity measurements hamper our understanding of brain-wide visual information processing.4-9 Under this limitation, considerable attempts have been made toward understanding how the brain processes information using a variety of developing theoretical frameworks.10-15
One of the analytic frameworks developed within the last two decades is state-space analysis,16 which provides a mechanistic structure of information processed in the lower-dimensional space of a neural population.17-19 This analytical tool identifies dynamic neural population structures that reflect information processing for general biological features20,21 and allows us to describe those features as a neural geometry with high temporal resolution,13-15 on the sub-second order. A large number of neural circuits may process information moment by moment,6 and they may form a population geometry, such as rotational,18 curvy,19,22 or straight23 geometries, as the typical and basic features of dynamics. A recent finding suggests that the combination of neural population geometries may be the key to processing information to transform sensory inputs into memory.24 Recent studies have extended these analytical frameworks25-29; however, because only a limited number of studies have compared brain-wide neural populations, how the brain processes information in the form of geometry remains elusive (except for ref. 30).
To examine how brain-wide neural dynamics are formed to process visual information, we accumulated the neural population data of monkeys from four laboratories, containing 10 neural populations, including 2,500 neurons across temporal/frontal/limbic networks (i.e., a meta-analysis). We applied state-space analysis (i.e., dimensionality reduction) to the neural population data, followed by a bootstrap resampling technique that detects and replicates neural modulation dynamics in a low-dimensional neural space as a parental population. Following the analysis of bootstrap replicates with the Lissajous curve function, our cross-study comparison revealed that a gradual shift in stochastic neural population signals occurred throughout the temporal-to-frontal brain regions.
RESULTS
We compared the trajectory geometries across many neural populations widely distributed in the brain, from the output regions of the ventral visual pathway31-34 to its downstream brain regions that may access memories associated with a visual stimulus. These included ten brain regions accumulated from nine monkeys examined in four laboratories (Table S1), from the higher-order visual area TE to its downstream brain regions in cortical, subcortical, and limbic structures, such as the temporal/orbitofrontal cortices, striatum, and hippocampus (HPC) (Figure 1A, No. 1 to 10). A total of 2,500 neurons were accumulated across the four behavioral tasks (Figure S1), in which visual items provided monkeys with position and/or reward information during active (Exps. 1 and 3) and passive (Exps. 2 and 4) behavioral responses. All of these behavioral tasks required monkeys to perceive visual cues to perform the task: Exp. 1, item-location-retention (ILR) task; Exp. 2, scene-based object-value task; Exp. 3, delayed reward tasks; Exp. 4, single-cue task (see STAR Methods). Using state-space analysis, we characterized structures of neural population geometry that appeared in the lower-dimensional neural space, which describe how neural modulation by the task parameters of interest processes information at the population level23,29 (i.e., targeted dimensionality reduction35).
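To make this analysis step concrete, the following minimal sketch (written in Python rather than the R/MATLAB used for the published analyses) illustrates the general idea of targeted dimensionality reduction on hypothetical data: each neuron's condition-averaged rate is regressed on a task parameter in every time bin, and PCA over the resulting modulation matrix yields a low-dimensional trajectory. The array shapes and variable names are assumptions for illustration, not the study's actual pipeline.

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical inputs (not the published dataset):
# rates: condition-averaged firing rates, shape (n_neurons, n_conditions, n_bins)
# task_value: the task parameter of interest per condition, shape (n_conditions,)
rng = np.random.default_rng(0)
n_neurons, n_conditions, n_bins = 200, 8, 12          # 12 bins of 0.05 s = 0.6 s
task_value = np.linspace(-1, 1, n_conditions)
rates = rng.normal(size=(n_neurons, n_conditions, n_bins))

# Regress each neuron's rate on the task parameter in every time bin;
# the slope (beta) describes that neuron's modulation at that moment.
X = np.column_stack([np.ones(n_conditions), task_value])      # design matrix
betas = np.empty((n_neurons, n_bins))
for i in range(n_neurons):
    for t in range(n_bins):
        coef, *_ = np.linalg.lstsq(X, rates[i, :, t], rcond=None)
        betas[i, t] = coef[1]                                  # modulation slope

# PCA across neurons: each time bin becomes a point in a low-dimensional
# state space, and the ordered points form the population trajectory.
pca = PCA(n_components=3)
trajectory = pca.fit_transform(betas.T)    # shape (n_bins, 3)
print(trajectory[:, :2])                   # PC1-2 trajectory over 0.6 s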
We found all three types of geometric patterns, rotational, curvy, and straight geometries, in the top two dimensions (Figures 1B-1D; see Figure S2 for performance as the percentage of variance explained), as well as structures that were unclear on visual inspection (Figure 1E). All 10 neural populations showed a significant structure in the principal component (PC) 1-2 plane based on shuffle controls (Figure S3, p < 0.05 for all PC1 and PC2). PC3 did not reach statistical significance in some neuronal populations (Figure S3, Exps. 2 and 4; compare the black and gray dots for each PC). These identified geometric structures appeared to be distributed from complex to simple, reflecting the circuit distance to the visual input (Figure 1). For example, TE (Figure 1B, No. 1, A11 plane in A) and its downstream region, the striatum tail (STRt) (Figure 1B, No. 2, A11 plane in A), showed rotational geometries (i.e., circles) during visual item recognition. In more detail, at the beginning of information processing after visual item presentation (Figure 1B left, see green S at time = 0), the neural state was positioned around the center of the PC1-2 plane; it then rotated at approximately 0.2 s and moved toward the second quadrant with a counterclockwise rotation, returning close to the initial point (see green e, 0.6 s after visual item onset). An opposite rotation, with a smaller change, was observed for the worst visual item (orange). These rotational structures were also observed in the STRt on a similar timescale (Figure 1B, right).
In the downstream brain regions, such as the perirhinal cortex (PRC) and caudate body (CDb), rotational or curvy dynamics were observed (Figure 1C, No. 3, PRC, A11 plane in A, and No. 4, CDb, A23 plane in A), characterized by a half rotation ending in the opposite part of the neural space, with endpoints deviating from the initial point (i.e., a half cycle), although it was unclear whether these reflected rotational or curvy dynamics. In contrast, straight dynamics were observed in brain regions farther from the visual inputs, in the HPC (Figure 1A, No. 5 in the A11 plane; Figure 1D) and the central part of the orbitofrontal cortex (cOFC) (Figure 1A, No. 7 in the A32 plane; Figure 1D). In addition, the ventral striatum (VS) showed straight dynamics (Figure 1A, No. 6 in the A23 plane; Figure 1D), although some structures could not be clearly determined (Figure 1E, parahippocampal cortex, PHC; medial orbitofrontal cortex, mOFC). The straight dynamics also showed a geometric change back and forth along the straight trajectory (Figure 1D, i.e., overlapping continuous line). We note that one neural trajectory was obtained from a population of neurons with this analysis method.
In short, these qualitative observations based on visual inspection suggest that neural population structures may change dynamically through the visual recognition process, and that the shift of neural population geometries might occur throughout the cortical and subcortical structures across the temporal/frontal/limbic network.
Figure 1. Neural population geometries in the visual memory pathway
(A) Anatomical depiction of neural populations obtained from the 10 brain regions in nine macaques during the four different behavioral tasks in Exps. 1 to 4.
(B-E) Rotational (B), curvy (C), straight (D), and unclear dynamics (E) detected by visual inspection. A single trajectory geometry was obtained in each brain region using neural population data. The number of neurons was 116-590 (Table S1). In A-E, the 10 brain regions are numbered as follows: 1. TE, 2. STRt, 3. PRC, 4. CDb, 5. HPC, 6. VS, 7. cOFC, 8. CDh&b, 9. PHC, and 10. mOFC. A 0.05 s time bin was used for the analysis. The time from visual stimulus appearance is shown in seconds. Characters indicate stimulus conditions; Ib: best item; I2 to I7: second best to seventh best item; Iw: worst item; D: delay; M: magnitude; P: probability. See also Figures S1-S3 and Table S1.
Evaluation of geometric patterns based on their selected features
To quantify these geometric patterns occurring within approximately half a second of visual item recognition, we estimated indices for the characteristics of dynamic neural changes in the low-dimensional neural state (Figure 2A). They were as follows: the accumulated angle difference weighted by deviance, Σdθ, which reflects a measure similar to the centrifugal force (see STAR Methods for details; Figure 2A, top); the mean distance of vectors (d, Figure 2A, top); the rotational speed (θ/0.1 s, Figure 2A, bottom); and the distance between the start and end of the trajectory (dS-E, Figure 2A, bottom). We performed bootstrap resampling in each of the 10 neural populations to reconstruct neural geometry as a parental population 1,000 times (see STAR Methods).
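The sketch below gives one plausible way such indices could be computed from a PC1-2 trajectory; it is a hedged reconstruction for illustration, and the exact definitions used in STAR Methods may differ (for example, in how the deviance weighting and the angular reference are defined).

import numpy as np

def geometry_indices(traj, bin_s=0.05):
    """traj: (n_bins, 2) array of PC1-2 coordinates over time (assumed input)."""
    center = traj.mean(axis=0)
    vecs = traj - center                       # vectors from the trajectory center
    d = np.linalg.norm(vecs, axis=1)           # deviance of each point
    ang = np.arctan2(vecs[:, 1], vecs[:, 0])   # angle of each point
    dtheta = np.abs(np.angle(np.exp(1j * np.diff(ang))))   # wrapped angle steps

    sum_dtheta = np.sum(dtheta * d[1:])        # accumulated angle weighted by deviance
    mean_d = d.mean()                          # mean distance of vectors
    speed = np.degrees(dtheta.sum()) / (len(dtheta) * bin_s) * 0.1  # degrees per 0.1 s
    d_start_end = np.linalg.norm(traj[-1] - traj[0])                # start-to-end distance
    return sum_dtheta, mean_d, speed, d_start_end

# Example: a noiseless circular (rotational) trajectory over 0.6 s
t = np.arange(0, 0.6, 0.05)
circle = np.column_stack([np.cos(2 * np.pi * t / 0.6), np.sin(2 * np.pi * t / 0.6)])
print(geometry_indices(circle))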
Bootstrap resampling is an analytic technique to reconstruct the parental population from the observed sample data,36 which can provide insight into the types of neural population geometry represented in the parental population. We then estimated the parameter values for each replicated neural geometry, which provides the parental distribution of the geometric features. This bootstrap resampling was aimed at examining how dominantly a particular neural population geometry was observed in the replicates of the 10 neural populations; thus, we performed clustering across all the replicated data to identify whether a single geometric feature or a combination of particular geometric features was observed in some brain regions (otherwise, a single cluster would occupy all replicates in a single population in a brain region).
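A minimal sketch of this resampling step, assuming a neurons-by-time modulation matrix as input, is shown below; it is illustrative Python rather than the study's R/MATLAB code, and the toy population size stands in for the recorded datasets.

import numpy as np
from sklearn.decomposition import PCA

def bootstrap_trajectories(betas, n_boot=1000, seed=1):
    """betas: (n_neurons, n_bins) modulation matrix for one population (assumed input).
    Resample neurons with replacement, matching the original population size,
    and recompute the low-dimensional trajectory for every replicate."""
    rng = np.random.default_rng(seed)
    n_neurons, n_bins = betas.shape
    trajectories = np.empty((n_boot, n_bins, 2))
    for b in range(n_boot):
        idx = rng.integers(0, n_neurons, size=n_neurons)   # duplicates allowed
        resampled = betas[idx]
        pcs = PCA(n_components=2).fit_transform(resampled.T)
        trajectories[b] = pcs                              # one replicated geometry
    return trajectories

# Usage with a toy population (not the recorded data):
toy_betas = np.random.default_rng(0).normal(size=(295, 12))   # e.g., a TE-sized population
reps = bootstrap_trajectories(toy_betas, n_boot=100)
print(reps.shape)   # (100, 12, 2): 100 replicated PC1-2 trajectories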
We found that, across the 10 neural populations, these indices captured geometric features to some extent, corresponding to the rotational geometries observed in the TE and STRt (Figures 2B-2E). For instance, clusters identified based on the dendrogram and principal component analysis (PCA) (Figures 2B and 2C) showed a rotational characteristic with high rotational speed (Figure 2D, right, red), large Σdθ (Figure 2D, middle, red), large d (Figure 2D, middle, red), and small dS-E (Figure 2D, left, red), which occupied more than 90% of the STRt replicates in the best item condition (Figure 2E, see also Figure 1B, right, green trajectory). A smaller rotational structure characterized by smaller values of Σdθ was captured by another cluster (Figures 2D and 2E, pale red), which occupied approximately 90% of the STRt replicates in the worst item condition (Figure 2E, see also Figure 1B right, orange trajectory). These rotational features were observed in other temporal brain regions (Figure 2E, see reddish, more than 50% in TE, 20-40% in PRC and PHC) but were rarely observed in the frontal/limbic brain regions, such as the HPC (less than 10% in all remaining brain regions). In contrast, curvy/straight dynamics were observed in other clusters in the downstream brain regions (Figure 2E, green and blue).
Figure 2. Quantitative evaluation of geometric structures according to the rotational features
(A) Schematic depictions of the estimation of the accumulated angle difference weighted by the deviance, Σdθ. The accumulated angle difference indicates the degree of geometric change in terms of the rotational force across time. Vector distance (d), rotational speed (θ/0.1 s), and start-to-endpoint distance (dS-E) were also estimated.
(B) Dendrogram estimated from these four parameter values based on bootstrap resampling across 10 neural populations.
(C) Percentage of variance explained by PCA of bootstrap resampling data across 10 neural populations.
(D) Clusters detected among the four parameters based on the PCA. Dots represent the 20,000 replicates (1,000 replicates in each of the 10 brain regions times two task conditions).
(E) Percentage of the identified clusters in each of the 10 brain regions. Each neural population contained two components of neural information: the best (B) and worst (W) conditions in Exps. 1 and 3, magnitude (M) and delay (D) of the rewards in Exp. 2, and magnitude (M) and probability (P) of rewards in Exp. 4. Colors on the atlas indicate geometry types on visual inspection in Figure 1A.
Collectively, across the replicates obtained from the 10 neural populations, a cluster with high rotational speed occupied the STRt and half of the TE populations (Figures 2D and 2E, reddish), while curvy/straight dynamics occupied most of the replicates in the remaining cortical and subcortical regions (Figures 2D and 2E, blue and green). We note that the meaning of these geometric differences among brain regions is addressed in the discussion section.
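The clustering step described above can be sketched as follows, assuming each bootstrap replicate has been summarized by the four indices; the toy feature values, cluster count, and linkage settings are illustrative assumptions, not the published parameters.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# features: one row per bootstrap replicate, columns = the four indices
# (accumulated angle, mean d, rotational speed, dS-E); toy values stand in for the data behind Figure 2.
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal([8, 2, 60, 0.3], 0.5, size=(500, 4)),   # rotational-like replicates
    rng.normal([3, 1, 20, 1.5], 0.5, size=(500, 4)),   # straighter replicates
])

z = StandardScaler().fit_transform(features)           # put indices on a common scale
tree = linkage(z, method='ward')                       # dendrogram (Figure 2B analog)
labels = fcluster(tree, t=4, criterion='maxclust')     # cut into a few clusters

pcs = PCA(n_components=2).fit_transform(z)             # low-dimensional view (Figure 2C/D analog)
for k in np.unique(labels):
    print(f"cluster {k}: {np.sum(labels == k)} replicates, "
          f"mean speed {features[labels == k, 2].mean():.1f}")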
Parameterization of geometric patterns using Lissajous curve function
To parameterize these geometric features in more detail, we fitted the Lissajous curve function37 to the replicated data, which can mathematically capture rotational, curvy, and straight geometries with a single function. In the Lissajous function, any two-dimensional geometric feature represented by F(x, y) is captured using the following equations:
x = Ax cos(ωx t(i) + Φx) + bx (Equation 1)
y = Ay cos(ωy t(i) + Φy) + by (Equation 2)
where ω and Φ (i.e., ωx, ωy, Φx, and Φy) represent the cycle of the rotation and its deviation as a function of time, t(i). t(i) takes values from 0 to 0.6 s in all four experiments, and thus one cycle of the trajectory is represented as 0 to 3.33π for ω. For the horizontal and vertical axes, Ax and Ay determine the amplitude of the trajectory, respectively, whereas bx and by determine its location on the PC1-2 plane. In this function, rotational dynamics are represented by the same ω in the x and y formulas and a 0.5π difference in Φ between the x and y formulas (Figure 3A, left). We note that this rotational example represents less than one cycle because ωx = 3.0π. In contrast, straight dynamics are represented by the same ω and the same Φ between the two formulas (Figure 3A, right). Curvy dynamics are represented by some difference in ω and the same Φ (Figure 3A, middle) (see also Figure S4). We fitted this Lissajous curve function to each of the 20,000 bootstrap replicates derived from the 10 neural populations (see STAR Methods; 1,000 replicates times 10 populations times two conditions). For instance, three replicated examples obtained from the HPC population were well captured by the Lissajous curve function, as rotational-to-straight trajectories (Figure 3B). We obtained estimates for all fitted parameters (Figure 3C) and thereafter applied clustering to these parameter data (Figures 3D-3F) to identify geometry types as a function of the Lissajous curve parameters.
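A minimal illustration of fitting Equations 1 and 2 to a replicated trajectory by nonlinear least squares is given below; the optimizer, initial values, and toy data are assumptions (the study reports maximum log likelihoods, so the actual fitting procedure may differ), and fits may land in equivalent parameterizations of the same curve.

import numpy as np
from scipy.optimize import curve_fit

def lissajous_1d(t, A, omega, phi, b):
    """One coordinate of the Lissajous curve: A*cos(omega*t + phi) + b."""
    return A * np.cos(omega * t + phi) + b

def fit_lissajous(t, xy, p0=(1.0, 3.0 * np.pi, 0.0, 0.0)):
    """Fit x(t) and y(t) independently; xy is (n_bins, 2). The initial guess p0
    assumes roughly one cycle over 0.6 s (omega near 3 pi rad/s), as in Figure 3A."""
    px, _ = curve_fit(lissajous_1d, t, xy[:, 0], p0=p0, maxfev=10000)
    py, _ = curve_fit(lissajous_1d, t, xy[:, 1], p0=p0, maxfev=10000)
    return px, py

# Toy replicate: a rotational trajectory (same omega, phases 0.5*pi apart) plus noise.
t = np.arange(0, 0.6, 0.05)
omega_true = 3.0 * np.pi           # rad/s; 3.33*pi would be exactly one cycle over 0.6 s
rng = np.random.default_rng(0)
xy = np.column_stack([
    np.cos(omega_true * t) + 0.05 * rng.normal(size=t.size),
    np.cos(omega_true * t - 0.5 * np.pi) + 0.05 * rng.normal(size=t.size),
])
px, py = fit_lissajous(t, xy)
print("omega_x/pi =", px[1] / np.pi, "omega_y/pi =", py[1] / np.pi)
print("phase difference/pi =", (px[2] - py[2]) / np.pi)   # near 0.5 for rotational dynamics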
We found that rotational dynamics (Figures 3G and 3H, clusters C1-C5, reddish) appeared in the TE and STRt, where they occupied high percentages of these neural populations (Figure 3H, approximately 70%), and they were also observed in more than 50% of the temporal brain regions. Cluster 5 seemed to have intermediate characteristics between rotational and curvy structures; if we define cluster 5 as curvy, the rotational percentage becomes low in the PRC, PHC, and HPC (30-40%), but not in the STRt (50-70%). Curvy structures were predominantly observed in the CDb population, with more than 40% occurrence (Figures 3G and 3H, clusters 6 and 7, greenish). These clusters were not observed deterministically but rather stochastically, as also seen in the predominant percentages of intermediate features between curvy and straight dynamics (Figures 3G and 3H, cluster 8). Straight dynamics were predominant in the frontal brain regions, while they were also observed in the temporal cortices (Figures 3G and 3H, clusters 8-10, bluish). Even for the STRt, the neural population contained curvy or straight dynamics in more than 20% of replicates. These heterogeneous mixtures of replicated signals suggest that neural dynamics emerged in a stochastic manner at the population level, with a functional gradient in the temporal/frontal/limbic networks of cortical and subcortical structures. The brain-wide neural population may propagate item information as a heterogeneous mixture within approximately half a second. We note that our results do not reflect trial-by-trial variability of neural population activity. Additionally, we acknowledge that our results contain some noise, which contributes to stochastic differences among replicates to some extent.
DISCUSSION
Collectively, our meta-analysis results indicated that rotational, curvy, and straight neural geometries were found in neural populations across the temporal-frontal-limbic network of cortical and subcortical structures (Figure 1). The orbitofrontal cortex (cOFC and mOFC) and their target subcortical brain region, the VS, predominantly showed curvy, straight, and intermediate dynamics (Figures 3G and 3H). These observed dynamics exhibited maximum modulation at approximately 0.3 s after visual onset, except for the slowest VS dynamics (Figure 1D). Under our task conditions, the rotational dynamics, in contrast, were observed in the temporal cortices and their connected striatal regions, at a relatively shorter latency of around 0.2 s when rotation started (Figure 1B). Although different monkeys performed the active (Exps. 1 and 3) and passive (Exps. 2 and 4) behavioral tasks, the rotational dynamics were observed predominantly in the STRt and TE (Figure 3H). In contrast, the straight dynamics started their geometric change approximately 0.2 s after visual onset (Figure 1D, see the distance between initial point S and the 0.2 s location), indicating that they follow the rotational dynamics in processing visual information.
Figure 3. Quantitative evaluation of geometric structures using the Lissajous curve function
(A) Schematic depictions of trajectory geometries using Lissajous function parameters. For all figures, ωx and ωy are 3π.
(B) Three examples of bootstrap replicates for the HPC population fitted by the Lissajous function. These examples were obtained in the best stimulus condition. L indicates the maximum log likelihood. Estimated parameters were as follows: left, ωx, 2.78π, Φx, 0.11, L, 34.6, ωy, 2.96π, Φy, 0.10, L, 31.6; middle, ωx, 2.51π, Φx, 0.03, L, 24.2, ωy, 3.82π, Φy, 0.08, L, 32.7; right, ωx, 2.21π, Φx, 0.16, L, 26.9, ωy, 2.78π, Φy, 0.58, L, 29.1.
(C) Probability density estimated for Lissajous parameters obtained from bootstrap replicates across 10 neural populations times two conditions.
(D) Dendrogram estimated from Lissajous parameter values based on bootstrap resampling across 10 neural populations times two conditions.
(E) Percentage of variance explained by PCA of bootstrap resampling data across 10 neural populations times two conditions.
(F) Clusters determined using PCA. Data are shown for PC1 to 3.
(G) Reconstructed trajectory in each cluster based on bootstrap resampling. The trajectories in clusters 1-10 were drawn using the median values of the Lissajous parameters in each cluster.
(H) Percentage of clusters in each of the 10 brain regions times two conditions. BW: best and worst conditions. MD: magnitude and delay conditions. MP: magnitude and probability conditions. See also Figure S4.
In our dataset, monkeys were engaged in tasks involving stimulus-location association (Exp. 1), stimulus-reward association (Exp. 2), stimulus-reward association for delay and magnitude (Exp. 3), and stimulus-reward association for probability and magnitude (Exp. 4). All four tasks required monkeys to process visual memory associations to perform the task. Taken together, while these neural geometries were observed during visual recognition of such associations, reflecting visual memory processing, the three geometries were distinctive in terms of their geometric patterns and their dynamic changes over time (Figure 4A for summary), in which a rotational/curvy change was followed by a change in straight dynamics.
Function, geometries, and localization
Previous studies have shown that rotational dynamics are found broadly in the primary sensory24,38 and motor39 cortices, which are closer to the inputs and outputs of the brain, such as motor unit activity.40 Other studies have shown that the prefrontal cortex35,41 and parietal cortex22,38 exhibit curvy dynamics. Does this evidence mean that each brain region represents a particular geometry type in the activity of its neural population?
It is likely that the brain regions close to the input/output of the central nervous system represent particular types of geometries, while higher-order brain regions may represent various types of geometries. Because geometries reflect neural activity changes during behavioral tasks, the flexibility of a cognitive function must be related to changes in geometry type. In our case, we found curvy dynamics in the CDb, where action information is transferred from the cortices,42 in Exp. 3 (Figure 3H), but more heterogeneous geometries were observed in the CDh&b in Exp. 4 (Figure 3H). Although the recorded areas of neurons were not perfectly matched between these two experiments, geometric patterns must change in relation to the task demands placed on the subjects.
In the present study, we specifically focused on neural dynamics in two core senses: (1) low-dimensional geometries and (2) neural modulation dynamics. First, although these data were obtained from four different laboratories using distinctive behavioral tasks during the passive and active responses of monkeys, the low-dimensional features of neural dynamics are thought to be preserved across mammals in the brain-wide network.43,44 A recent study provides clear evidence that different animal species share and preserve their neural geometries during behavioral tasks.43 Indeed, it has been suggested that a low-dimensional manifold in the neural state space might be one of the representational states of biologically relevant information, similar to many combinations of physical properties in the world.45 Second, the dynamics in neural modulations examined here are comparable with those obtained using standard analytic frameworks in the rate-coding model, which have provided a huge amount of knowledge corresponding to low-dimensional neural activity modulation in the literature, such as the Gabor function in the visual cortices,46,47 movement direction and muscle force in the motor cortex,48,49 reward value in the parietal cortex50 and frontal cortical and subcortical regions,51,52 action values in the striatum,53,54 and comprehension of the location of animals during navigation in the HPC.55 Thus, in our analysis, the dynamics of these well-known brain features were compared as geometric patterns across neural populations during visual recognition.29
One concern with our meta-analysis approach is that there may be limitations in data interpretation in terms of data sharing and comparisons across different behavioral tasks and different individual animals. Is it possible to compare neural population trajectories using accumulated data across animals and tasks with a certain analytical tool? In the previous literature, when analyzing neural modulation using a linear regression model, most neurophysiologists compare the extent of neural modulations between different animals and between behavioral tasks. In one study, an attempt was made to compare neural modulation dynamics, as trajectory geometry, between different laboratories' data.56 Thus, the neural trajectory would be comparable among our shared data with greater deliberation.
Neural population stochasticity reconstructed using the bootstrap resampling
Our results showed that the variability of neural population geometry is distinct among the 10 neural populations examined (Figure 4B, summary of Figure 3H). This raises the question of why such variability is observed in the reconstructed neural geometries. Using bootstrap resampling, we reconstructed the parental population of geometry from the observed sample data. In this approach, (1) we generated replicates of neural populations through random resampling of neuronal activity, allowing duplicates when resampling from a single neural population dataset (i.e., each population dataset shown in Table S1); the number of resamples was matched to the number of recorded neurons in each neural population dataset (e.g., 295 and 407 times for TE and PRC, respectively). (2) This process was repeated 1,000 times for each neural population to yield 1,000 replicates. (3) Each replicate was analyzed using dimensionality reduction to produce 1,000 replicated geometries per neural population. This approach provided parental distributions of neural-geometry parameters for each of the 10 neural populations (Figures 2 and 3). Clustering analysis then revealed that some populations, such as STRt, showed nearly uniform trajectories, while others, such as PRC, displayed multiple trajectory types (Figures 2E and 3H). These trajectory patterns (Figures 3F-3H) indicate that the parental populations have considerable variability.
Figure 4. Summary of the observed dynamics and anatomical connections in the visual memory pathway
(A) Geometries depicted on the same arbitrary scales on the PC1-2 plane for the eight neural populations shown in Figures 1B-1D. The start of the trajectory (S) is aligned to describe each trajectory. e indicates the end of the trajectory at 0.6 s.
(B) Proportions of the clusters defined in each of the 10 brain regions are described with the anatomical connections. Reddish: rotational, greenish: curvy, bluish: straight dynamics. Data from CDh&b and CDb are merged (CD).
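The per-region cluster proportions summarized here (and plotted in Figures 2E, 3H, and 4B) amount to a simple tabulation over replicates; the sketch below illustrates the idea on toy labels and is not the study's code.

import numpy as np

def cluster_percentages(region_labels, cluster_labels):
    """Tabulate, for each brain region, the percentage of bootstrap replicates
    assigned to each cluster (the quantity plotted per region in Figures 2E/3H)."""
    regions = np.unique(region_labels)
    clusters = np.unique(cluster_labels)
    table = {}
    for r in regions:
        mask = region_labels == r
        table[r] = {int(c): 100.0 * np.mean(cluster_labels[mask] == c) for c in clusters}
    return table

# Toy example: two hypothetical regions with 1,000 replicates each.
rng = np.random.default_rng(0)
region = np.repeat(["STRt", "PRC"], 1000)
cluster = np.concatenate([
    rng.choice([1, 2], size=1000, p=[0.9, 0.1]),          # STRt: nearly uniform (rotational)
    rng.choice([1, 2, 3], size=1000, p=[0.3, 0.4, 0.3]),  # PRC: mixed trajectory types
])
print(cluster_percentages(region, cluster))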
The observed variability of these replicated parameters in multidimensional space (Figures 2D and 3F) may reflect functional aspects of neural population dynamics. One possibility is that the variability arises from dynamic engagement within the neural population: some neurons contribute to information processing at one moment but may not participate at another moment at the population level (a phenomenon we term "population stochasticity"). Another possibility is that heterogeneity within the neural population contributes to this variability through the random resampling process. However, the latter explanation is less likely because, with a larger neural population, replicate variability should decrease. Interestingly, such a decrease did not occur even in the largest sampled population, the HPC, with 590 neurons (Figure 3, HPC).
These findings suggest that neural population stochasticity is key to understanding how large-scale networks process information. Importantly, we note that this variability does not reflect trial-by-trial variability of neural population activity, as no simultaneous neural recordings were made in this study.
Functional implications of neural geometries
Our findings add to the emerging literature describing how visual inputs alter brain-wide neural dynamics associated with visual memories by connecting neural geometry types and their alignments across many brain regions. Previous studies have shown the existence of different types of neural population dynamics in each individual study.22,24,35,38,39,41 Although some of these dynamics may reflect task demand, as observed in the dorsolateral prefrontal cortex,35,57 they are difficult to disentangle from changes in behavioral and neural activity levels and may involve some transformation of information for behavioral responses.58 It is possible that the dorsal and motor-related brain regions have this type of flexibility in their dynamics, as partly observed in this study in the CDb, where curvy dynamics were predominantly observed (Figure 4B).
Our results raise the possibility that geometric features determine important neural mechanisms widely observed in the brain. For instance, the stochastic gradient along the relative distance to the visual input may reflect the dynamics of the neural circuitry within half a second (Figure 4B). In contrast, the STRt deterministically showed rotational geometries in the bootstrap replicates (Figure 4B, reddish clusters occupy almost 70% of replicates). The difference between stochastic and deterministic observations of geometry suggests two possible functional meanings. One possibility is that a neural population may process information with curvy dynamics in some trials but with rotational dynamics in other trials. The other possibility is that the homogeneity/heterogeneity of neuronal activity in a population contributes to the variability in the replicates, although this is unlikely because bootstrap resampling should produce more similar replicates when the number of observations is large (see the hippocampal results, Table S1, n = 590 neurons, the largest population).
In summary (Figure 4B), the unidimensional straight dynamics in the hippocampal-frontal circuitry may reflect memory access during visual recognition, such as for location and reward. The rotational dynamics in the TE/STRt might reflect the visual recognition process, during which recurrent feedback signals change the circuit dynamics. Future studies should test the underlying functional mechanisms and determine whether engagement of changes in behavior and/or task context is best considered for whole-brain neural population activity. Regardless of the mechanism, the shift of modulation structures in the lower-dimensional neural space could play a fundamental role in brain-wide information processing, such as transforming visual feature recognition into memory access.
Limitations of the study
Our results are based on a meta-analysis and thus cannot be applied to understanding individual cases.
RESOURCE AVAILABILITY
Lead contact
Further information and requests for resources and reagents should
be directed to and will be fulfilled by the lead contact, Hiroshi Yamada
(h-yamada@md.tsukuba.ac.jp).
Materials availability
No materials available in this study.
Data and code availability
All data and analysis codes used in this study are available in the supporting
files.
ACKNOWLEDGMENTS
The authors thank Takashi Kawai, Yoshiko Yabana, Yuki Suwa, and Shiho Nishino for their technical assistance. We appreciate Shigeru Shinomoto for his comments. Monkey FU was provided by NBRP "Japanese Monkeys" through the National Bio Resource Project of the MEXT, Japan. Funding: This research was supported by JSPS KAKENHI (Grant Numbers JP: 15H05374, 22H04832), JST Moonshot R&D JPMJMS2294 (H.Y.), and the National Natural Science Foundation of China (Grant 32271088) (Y.N.).
AUTHOR CONTRIBUTIONS
H.Y. conceptualized the study. H.Y., Y.N., T.M., O.H., and J.K. designed the experiments. H.Y., H.C., J.K., Y.H., Y.I., and T.M. performed the experiments. H.Y. and Y.T. developed the analytical tools. H.Y., H.C., J.K., and Y.H. analyzed the data. H.Y., H.C., J.K., Y.H., and Y.T. wrote the manuscript. All authors edited and approved the final version of the manuscript.
DECLARATION OF INTERESTS
The authors declare no competing interests.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include
the following:
• KEY RESOURCES TABLE
• EXPERIMENTAL MODEL AND STUDY PARTICIPANT DETAILS
• METHOD DETAILS
  - Behavioral task
  - Electrophysiological recordings and data preprocessing
  - Statistical analysis
  - Behavioral analysis
  - Neural analysis
  - Rate-coding model: Conventional analyses to detect neural modulations in each neuron
  - Population dynamics using principal component analysis
  - Geometric patterns of the neural dynamics
• QUANTIFICATION AND STATISTICAL ANALYSIS
SUPPLEMENTAL INFORMATION
Supplemental information can be found online at https://doi.org/10.1016/j.isci.2025.111936.
Received: September 3, 2024
Revised: October 31, 2024
Accepted: January 28, 2025
Published: January 31, 2025
REFERENCES
1. Felleman, D.J., and Van Essen, D.C. (1991). Distributed hierarchical pro-
cessing in the primate cerebral cortex. Cerebr. Cortex 1, 1–47. https://
doi.org/10.1093/cercor/1.1.1-a.
2. Chen, H., and Naya, Y. (2021). Reunification of Object and View-Center
Background Information in the Primate Medial Temporal Lobe. Front. Be-
hav. Neurosci. 15, 756801. https://doi.org/10.3389/fnbeh.2021.756801.
3. Buzsaki, G., McKenzie, S., and Davachi, L. (2022). Neurophysiology of
Remembering. Annu. Rev. Psychol. 73, 187–215. https://doi.org/10.
1146/annurev-psych-021721-110002.
4. Goense, J.B.M., and Logothetis, N.K. (2008). Neurophysiology of the
BOLD fMRI signal in awake monkeys. Curr. Biol. 18, 631–640. https://
doi.org/10.1016/j.cub.2008.03.054.
5. Hu, S., Ciliberti, D., Grosmark, A.D., Michon, F., Ji, D., Penagos, H., Buzsáki, G., Wilson, M.A., Kloosterman, F., and Chen, Z. (2018). Real-Time Readout of Large-Scale Unsorted Neural Ensemble Place Codes. Cell Rep. 25, 2635–2642.e5. https://doi.org/10.1016/j.celrep.2018.11.033.
6. Steinmetz, N.A., Zatka-Haas, P., Carandini, M., and Harris, K.D. (2019).
Distributed coding of choice, action and engagement across the mouse
brain. Nature 576, 266–273. https://doi.org/10.1038/s41586-019-1787-x.
7. Climer, J.R., and Dombeck, D.A. (2021). Information Theoretic Ap-
proaches to Deciphering the Neural Code with Functional Fluorescence
Imaging. eNeuro 8, ENEURO.0266-21.2021. https://doi.org/10.1523/
ENEURO.0266-21.2021.
8. Paulk, A.C., Kfir, Y., Khanna, A.R., Mustroph, M.L., Trautmann, E.M., So-
per, D.J., Stavisky, S.D., Welkenhuysen, M., Dutta, B., Shenoy, K.V., et al.
(2022). Large-scale neural recordings with single neuron resolution using
Neuropixels probes in human cortex. Nat. Neurosci. 25, 252–263.
https://doi.org/10.1038/s41593-021-00997-0.
9. Manley, J., Lu, S., Barber, K., Demas, J., Kim, H., Meyer, D., Traub, F.M.,
and Vaziri, A. (2024). Simultaneous, cortex-wide dynamics of up to 1
million neurons reveal unbounded scaling of dimensionality with neuron
number. Neuron 112, 1694–1709.e5. https://doi.org/10.1016/j.neuron.
2024.02.011.
10. Dayan, P., and Abbott, L. (2001). Theoretical neuroscience: computational
and mathematical modeling of neural systems (MIT press).
11. Sanz-Leon, P., Knock, S.A., Spiegler, A., and Jirsa, V.K. (2015). Mathemat-
ical framework for large-scale brain network modeling in The Virtual Brain.
Neuroimage 111, 385–430. https://doi.org/10.1016/j.neuroimage.2015.
01.002.
12. Yuste, R. (2015). From the neuron doctrine to neural networks. Nat. Rev.
Neurosci. 16, 487–497. https://doi.org/10.1038/nrn3962.
13. Vyas, S., Golub, M.D., Sussillo, D., and Shenoy, K.V. (2020). Computation
Through Neural Population Dynamics. Annu. Rev. Neurosci. 43, 249–275.
https://doi.org/10.1146/annurev-neuro-092619-094115.
14. Humphries, M.D. (2021). Strong and weak principles of neural dimension
reduction. Preprint at arXiv. https://doi.org/10.51628/001c.24619.
15. Shenoy, K.V., and Kao, J.C. (2021). Measurement, manipulation and
modeling of brain-wide neural population dynamics. Nat. Commun. 12,
633. https://doi.org/10.1038/s41467-020-20371-1.
16. Timothy, L.M.K., and Bona, B.E. (1968). State Space Analysis: An Intro-
duction (McGraw-Hill).
17. Brendel, W., Romo, R., and Machens, C.K. (2011). Demixed Principal
Component Analysis. Adv. Neural Inf. Process. Syst. 24, 2654–2662.
18. Churchland, M.M., Cunningham, J.P., Kaufman, M.T., Foster, J.D., Nuyujukian, P., Ryu, S.I., and Shenoy, K.V. (2012). Neural population dynamics during reaching. Nature 487, 51–56. https://doi.org/10.1038/nature11129.
19. Mante, V., Sussillo, D., Shenoy, K.V., and Newsome, W.T. (2013). Context-
dependent computation by recurrent dynamics in prefrontal cortex. Na-
ture 503, 78–84. https://doi.org/10.1038/nature12742.
20. Gao, P., and Ganguli, S. (2015). On simplicity and complexity in the brave
new world of large-scale neuroscience. Curr. Opin. Neurobiol. 32,
148–155. https://doi.org/10.1016/j.conb.2015.04.003.
21. Rossi-Pool, R., and Romo, R. (2019). Low Dimensionality, High Robust-
ness in Neural Population Dynamics. Neuron 103, 177–179. https://doi.
org/10.1016/j.neuron.2019.06.021.
22. Okazawa, G., Hatch, C.E., Mancoo, A., Machens, C.K., and Kiani, R.
(2021). Representational geometry of perceptual decisions in the monkey
parietal cortex. Cell 184, 3748–3761.e18. https://doi.org/10.1016/j.cell.
2021.05.022.
23. Yamada, H., Imaizumi, Y., and Matsumoto, M. (2021). Neural Population
Dynamics Underlying Expected Value Computation. J. Neurosci. 41,
1684–1698. https://doi.org/10.1523/JNEUROSCI.1987-20.2020.
24. Libby, A., and Buschman, T.J. (2021). Rotational dynamics reduce inter-
ference between sensory and memory representations. Nat. Neurosci.
24, 715–726. https://doi.org/10.1038/s41593-021-00821-9.
25. Pellegrino, A., Stein, H., and Cayco-Gajic, N.A. (2024). Dimensionality
reduction beyond neural subspaces with slice tensor component analysis.
Nat. Neurosci. 27, 1199–1210. https://doi.org/10.1038/s41593-024-
01626-2.
26. Mukherjee, S., and Babadi, B. (2024). Adaptive modeling and inference of
higher-order coordination in neuronal assemblies: A dynamic greedy esti-
mation approach. PLoS Comput. Biol. 20, e1011605. https://doi.org/10.
1371/journal.pcbi.1011605.
27. Chang, Y.J., Chen, Y.I., Yeh, H.C., and Santacruz, S.R. (2024). Neurobio-
logically realistic neural network enables cross-scale modeling of neural
dynamics. Sci. Rep. 14, 5145. https://doi.org/10.1038/s41598-024-
54593-w.
28. Vahidi, P., Sani, O.G., and Shanechi, M.M. (2024). Modeling and dissoci-
ation of intrinsic and input-driven neural population dynamics underlying
behavior. Proc. Natl. Acad. Sci. USA 121, e2212887121. https://doi.org/
10.1073/pnas.2212887121.
29. Chen, H., Kunimatsu, J., Oya, T., Imaizumi, Y., Hori, Y., Matsumoto, M.,
Minamimoto, T., Naya, Y., and Yamada, H. (2023). Stable Neural Popula-
tion Dynamics in the Regression Subspace for Continuous and Categori-
cal Task Parameters in Monkeys. eNeuro 10, ENEURO.0016-23.2023.
https://doi.org/10.1523/ENEURO.0016-23.2023.
30. Khilkevich, A., Lohse, M., Low, R., Orsolic, I., Bozic, T., Windmill, P., and
Mrsic-Flogel, T.D. (2024). Brain-wide dynamics linking sensation to action
during decision-making. Nature 634, 890–900. https://doi.org/10.1038/
s41586-024-07908-w.
31. Saleem, K.S., and Tanaka, K. (1996). Divergent projections from the ante-
rior inferotemporal area TE to the perirhinal and entorhinal cortices in the
macaque monkey. J. Neurosci. 16, 4757–4775.
32. Naya, Y., Yoshida, M., and Miyashita, Y. (2003). Forward processing of
long-term associative memory in monkey inferotemporal cortex.
J. Neurosci. 23, 2861–2871.
33. Suzuki, W.A., and Naya, Y. (2014). The perirhinal cortex. Annu. Rev. Neu-
rosci. 37, 39–53. https://doi.org/10.1146/annurev-neuro-071013-014207.
34. Sasikumar, D., Emeric, E., Stuphorn, V., and Connor, C.E. (2018). First-
Pass Processing of Value Cues in the Ventral Visual Pathway. Curr. Biol.
28, 538–548.e3. https://doi.org/10.1016/j.cub.2018.01.051.
35. Aoi, M.C., Mante, V., and Pillow, J.W. (2020). Prefrontal cortex exhibits
multidimensional dynamic encoding during decision-making. Nat. Neuro-
sci. 23, 1410–1420. https://doi.org/10.1038/s41593-020-0696-5.
36. Efron, B., and Tibshirani, R.J. (1993). An Introduction to the Bootstrap
(Chapman & Hall/CRC).
37. Palmer, K., Ridgway, T., Al-Rawi, O., Johnson, I., and Poullis, M. (2011).
Lissajous figures: an engineering tool for root cause analysis of individual
cases–a preliminary concept. J. Extra Corpor. Technol. 43, 153–156.
38. Osako, Y., Ohnuki, T., Tanisumi, Y., Shiotani, K., Manabe, H., Sakurai, Y.,
and Hirokawa, J. (2021). Contribution of non-sensory neurons in visual
cortical areas to visually guided decisions in the rat. Curr. Biol. 31,
2757–2769.e6. https://doi.org/10.1016/j.cub.2021.03.099.
39. Ames, K.C., Ryu, S.I., and Shenoy, K.V. (2014). Neural dynamics of reach-
ing following incorrect or absent motor preparation. Neuron 81, 438–451.
https://doi.org/10.1016/j.neuron.2013.11.003.
40. Marshall, N.J., Glaser, J.I., Trautmann, E.M., Amematsro, E.A., Perkins, S.M., Shadlen, M.N., Abbott, L.F., Cunningham, J.P., and Churchland, M.M. (2022). Flexible neural control of motor units. Nat. Neurosci. 25, 1492–1504. https://doi.org/10.1038/s41593-022-01165-8.
41. Aoi, M.C., and Pillow, J.W. (2018). Model-based targeted dimensionality
reduction for neuronal population data. Adv. Neural Inf. Process. Syst.
31, 6690–6699.
42. Fan, Y., Gold, J.I., and Ding, L. (2020). Frontal eye field and caudate neu-
rons make different contributions to reward-biased perceptual decisions.
Elife 9, e60535. https://doi.org/10.7554/eLife.60535.
43. Safaie, M., Chang, J.C., Park, J., Miller, L.E., Dudman, J.T., Perich, M.G.,
and Gallego, J.A. (2023). Preserved neural dynamics across animals per-
forming similar behaviour. Nature 623, 765–771. https://doi.org/10.1038/
s41586-023-06714-0.
44. Melbaum, S., Russo, E., Eriksson, D., Schneider, A., Durstewitz, D., Brox,
T., and Diester, I. (2022). Conserved structures of neural activity in senso-
rimotor cortex of freely moving rats allow cross-subject decoding. Nat.
Commun. 13, 7420. https://doi.org/10.1038/s41467-022-35115-6.
45. Allen, C.E., Beldade, P., Zwaan, B.J., and Brakefield, P.M. (2008). Differ-
ences in the selection response of serially repeated color pattern charac-
ters: standing variation, development, and evolution. BMC Evol. Biol. 8,
94. https://doi.org/10.1186/1471-2148-8-94.
46. Tolhurst, D.J., and Movshon, J.A. (1975). Spatial and temporal contrast
sensitivity of striate cortical neurones. Nature 257, 674–675. https://doi.
org/10.1038/257674a0.
47. Jones, J.P., and Palmer, L.A. (1987). An evaluation of the two-dimensional
Gabor filter model of simple receptive fields in cat striate cortex.
J. Neurophysiol. 58, 1233–1258. https://doi.org/10.1152/jn.1987.58.
6.1233.
48. Georgopoulos, A.P., Kalaska, J.F., Caminiti, R., and Massey, J.T. (1982).
On the relations between the direction of two-dimensional arm move-
ments and cell discharge in primate motor cortex. J. Neurosci. 2,
1527–1537.
49. Fetz, E.E., and Cheney, P.D. (1980). Postspike facilitation of forelimb mus-
cle activity by primate corticomotoneuronal cells. J. Neurophysiol. 44,
751–772. https://doi.org/10.1152/jn.1980.44.4.751.
50. Platt, M.L., and Glimcher, P.W. (1999). Neural correlates of decision vari-
ables in parietal cortex. Nature 400, 233–238.
51. Yamada, H., Louie, K., Tymula, A., and Glimcher, P.W. (2018). Free choice
shapes normalized value signals in medial orbitofrontal cortex. Nat. Com-
mun. 9, 162. https://doi.org/10.1038/s41467-017-02614-w.
52. Imaizumi, Y., Tymula, A., Tsubo, Y., Matsumoto, M., and Yamada, H.
(2022). A neuronal prospect theory model in the brain reward circuitry.
Nat. Commun. 13, 5855. https://doi.org/10.1038/s41467-022-33579-0.
53. Yamada, H., Inokawa, H., Matsumoto, N., Ueda, Y., Enomoto, K., and Ki-
mura, M. (2013). Coding of the long-term value of multiple future rewards in
the primate striatum. J. Neurophysiol. 109, 1140–1151. https://doi.org/10.
1152/jn.00289.2012.
54. Yamada, H., Inokawa, H., Matsumoto, N., Ueda, Y., and Kimura, M. (2011).
Neuronal basis for evaluating selected action in the primate striatum. Eur.
J. Neurosci. 34, 489–506. https://doi.org/10.1111/j.1460-9568.2011.
07771.x.
55. O’Keefe, J., and Dostrovsky, J. (1971). The hippocampus as a spatial map.
Preliminary evidence from unit activity in the freely-moving rat. Brain Res.
34, 171–175. https://doi.org/10.1016/0006-8993(71)90358-1.
56. Kobak, D., Brendel, W., Constantinidis, C., Feierstein, C.E., Kepecs, A.,
Mainen, Z.F., Qi, X.L., Romo, R., Uchida, N., and Machens, C.K. (2016).
Demixed principal component analysis of neural population data. Elife 5,
e10989. https://doi.org/10.7554/eLife.10989.
57. Murray, J.D., Bernacchia, A., Roy, N.A., Constantinidis, C., Romo, R., and
Wang, X.J. (2017). Stable population coding for working memory coexists
with heterogeneous neural dynamics in prefrontal cortex. Proc. Natl. Acad.
Sci. USA 114, 394–399. https://doi.org/10.1073/pnas.1619449114.
58. Rossi-Pool, R., Zainos, A., Alvarez, M., Diaz-deLeon, G., and Romo, R.
(2021). A continuum of invariant sensory and behavioral-context percep-
tual coding in secondary somatosensory cortex. Nat. Commun. 12,
2000. https://doi.org/10.1038/s41467-021-22321-x.
59. Hays, A., Richmond, B., and Optican, L. (1982). Unix-based multiple process system for real-time data acquisition and control. WESCON 2, 1–10.
60. Naya, Y., and Suzuki, W.A. (2011). Integrating what and when across the
primate medial temporal lobe. Science 333, 773–776. https://doi.org/10.
1126/science.1206773.
61. Yamada, H., Inokawa, H., Hori, Y., Pan, X., Matsuzaki, R., Nakamura, K.,
Samejima, K., Shidara, M., Kimura, M., Sakagami, M., and Minamimoto,
T. (2016). Characteristics of fast-spiking neurons in the striatum of
behaving monkeys. Neurosci. Res. 105, 2–18. https://doi.org/10.1016/j.
neures.2015.10.003.
62. Yamamoto, S., Monosov, I.E., Yasuda, M., and Hikosaka, O. (2012). What
and where information in the caudate tail guides saccades to visual ob-
jects. J. Neurosci. 32, 11005–11016. https://doi.org/10.1523/JNEURO-
SCI.0828-12.2012.
63. Kunimatsu, J., Maeda, K., and Hikosaka, O. (2019). The Caudal Part of Pu-
tamen Represents the Historical Object Value Information. J. Neurosci. 39,
1709–1719. https://doi.org/10.1523/JNEUROSCI.2534-18.2018.
64. Kunimatsu, J., Yamamoto, S., Maeda, K., and Hikosaka, O. (2021). Envi-
ronment-based object values learned by local network in the striatum
tail. Proc. Natl. Acad. Sci. USA 118, e2013623118. https://doi.org/10.
1073/pnas.2013623118.
65. Chen, X., and Stuphorn, V. (2015). Sequential selection of economic good
and action in medial frontal cortex of macaques during value-based deci-
sions. Elife 4, e09418. https://doi.org/10.7554/eLife.09418.
66. Yamada, H., Matsumoto, N., and Kimura, M. (2004). Tonically active
neurons in the primate caudate nucleus and putamen differentially
encode instructed motivational outcomes of action. J. Neurosci. 24,
3500–3510.
67. Yamada, H., Matsumoto, N., and Kimura, M. (2007). History- and current
instruction-based coding of forthcoming behavioral outcomes in the stria-
tum. J. Neurophysiol. 98, 3557–3567.
68. Inokawa, H., Matsumoto, N., Kimura, M., and Yamada, H. (2020). Tonically
Active Neurons in the Monkey Dorsal Striatum Signal Outcome Feedback
during Trial-and-error Search Behavior. Neuroscience 446, 271–284.
https://doi.org/10.1016/j.neuroscience.2020.08.007.
69. Chen, H., and Naya, Y. (2020). Forward Processing of Object-Location As-
sociation from the Ventral Stream to Medial Temporal Lobe in Nonhuman
Primates. Cerebr. Cortex 30, 1260–1271. https://doi.org/10.1093/cercor/
bhz164.
70. Hori, Y., Mimura, K., Nagai, Y., Fujimoto, A., Oyama, K., Kikuchi, E., Inoue,
K.I., Takada, M., Suhara, T., Richmond, B.J., and Minamimoto, T. (2021).
Single caudate neurons encode temporally discounted value for formu-
lating motivation for action. Elife 10, e61248. https://doi.org/10.7554/eL-
ife.61248.
71. Minamimoto, T., La Camera, G., and Richmond, B.J. (2009). Measuring
and modeling the interaction among reward size, delay to reward, and
satiation level on motivation in monkeys. J. Neurophysiol. 101, 437–447.
72. Burnham, K.P., and Anderson, D.R. (2004). Multimodel inference: under-
standing AIC and BIC in model selection. Socio. Methods Res. 33,
261–304.
STAR+METHODS
KEY RESOURCES TABLE
REAGENT or RESOURCE | SOURCE | IDENTIFIER
Software and algorithms
R software 4.4 | R project | https://www.r-project.org/
MATLAB 2020b | MathWorks Inc. | https://mathworks.com/products/matlab.html
Adobe Illustrator CS6 | Adobe | https://www.adobe.com/products/illustrator.htm
EXPERIMENTAL MODEL AND STUDY PARTICIPANT DETAILS
Subjects and experimental procedures are as follows. Nine rhesus monkeys were used in the present study (Exp. 1: Macaca mulatta,
A, 9.3 kg, male; Macaca mulatta, D, 9.5 kg, male; Exp 2: Macaca mulatta, WK, 12.0 kg, male; Macaca mulatta, SP, 7.0 kg, male; Exp 3:
Macaca mulatta, BI, 8.2 kg, male; Macaca mulatta, FG, 11.0 kg, male; Macaca mulatta, ST, 5.2 kg, male; Exp 4: Macaca mulatta, SUN,
7.1 kg, male; Macaca fuscata, FU, 6.7 kg, female). All experimental procedures were approved by the Institutional Animal Care and Use Committee of Laboratory Animals at Peking University (Exp. 1, project number Psych-YujiNaya-1); the Animal Care and Use Committee of the National Eye Institute, in compliance with the Public Health Service Policy on the Humane Care and Use of Laboratory Animals (Exp. 2, protocol number NEI-622); the Animal Ethics Committee of the National Institutes for Quantum Science and Technology (Exp. 3, protocol no. 11-1038-11); and the Animal Care and Use Committee of the University of Tsukuba (Exp. 4, protocol no. 23-057). All procedures were performed in reference to the US Public Health Service Guide for the Care and Use of Laboratory Animals.
METHOD DETAILS
Behavioral task
Exp. 1. Item-location-retention (ILR) task
The animals performed the task under dim light conditions in an electromagnetically shielded room. The task started with an encoding phase, which was initiated by the animal pulling a lever and fixating on a white square (0.6°) presented within one of four quadrants at 12.5° (monkey A) or 10° (monkey D) from the center of the touchscreen (3M MicroTouch Display M1700SS, 17 in), situated approximately 28 cm from the subjects. The eye position was monitored using an infrared digital camera with a sampling frequency of 120 Hz (ETL-200, ISCAN). After fixation for 0.6 s, one of six items (3.0° radius for monkey A and 2.5° radius for monkey D) was presented in the same quadrant as a sample stimulus for 0.3 s, followed by another 0.7 s of fixation on the white square. If the fixation was successfully maintained (typically < 2.5°), the encoding phase ended with the presentation of a single drop of water.
The encoding phase was followed by a blank interphase delay interval of 0.7–1.4 s during which no fixation was required. The
response phase was initiated using a fixation dot presented at the center of the screen. One of the six items was then presented
at the center for 0.3 s, as a cue stimulus. After another 0.5 s delay period, five disks were presented as choices, including a blue
disk in each quadrant and a green disk in the center. When the cue stimulus was the same as the sample stimulus, the animal
was required to make a choice by touching the blue disk in the same quadrant as the sample (i.e., the match condition). Otherwise,
the subject was required to choose the green disk (i.e., non-match condition). If the animal made the correct choice, four–eight drops
of water were provided as a reward; otherwise, an additional 4 s was added to the standard inter-trial interval (1.5–3 s). The number of
reward drops was increased to encourage the animal to maintain good performance in the latter phase of a daily recording session,
which was typically conducted in blocks (e.g., a minimal set of 60 trials with equal numbers of visual items presented in the match/non-match conditions). During the trial, a large gray square (48° on each side) was presented at the center of the display as a background.
At the end of the trial, all stimuli disappeared, and the entire screen displayed a light red color during the inter-trial interval. The start of
a new trial was indicated by the reappearance of a large gray square on the display, at which point the monkey could pull the lever,
triggering the appearance of a white fixation dot.
In the match condition, sample stimuli were chosen pseudo-randomly from six well-learned visual items, and each item was presented pseudo-randomly within the four quadrants, resulting in 24 (6 × 4) configuration patterns. In the non-match condition, the location
of the sample stimulus was randomly chosen from the four quadrants, and the cue stimulus was randomly chosen from the remaining
five items that differed from the sample. The match and non-match conditions were randomly presented in a ratio of 4:1, resulting in
30 (24 + 6) configuration patterns. The same six stimuli were used during all the recording sessions.
Exp. 2. Scene-based object-value task
Animals learned the scene-object associations. After the monkeys fixated on the red-square fixation point on the scene image for
0.6–1 s, the fixation cue disappeared, and two visual items (objects of different values) appeared simultaneously in a different
hemifield (for training and neuronal testing) or the same hemifield (for pharmacological experiments). A reward was given after the
monkeys made a saccade to the stimulus and maintained fixation for 0.2 s. Half of the fractal visual items were associated with a
large reward (0.3 mL), and the other half were associated with a small reward (0.1 mL). This reward association changed depending
on the scene (Figure S1D).
Passive viewing task
One of the two scene images was presented randomly for 0.8 s. If the monkey
fixated on a central red square, two to four fractals were presented sequentially on the scene image within the neuron’s receptive
field (presentation time, 0.4 s; interstimulus interval, 0.4 s; Figure S1C). A liquid reward (0.2 mL) was delivered 0.3 s after the last object
was presented. Thus, reward occurrence was not associated with any of the visual items. Each item was presented at least seven
times per session.
Exp. 3. Delayed reward tasks
The monkeys were seated on a primate chair inside a dark, sound-attenuated, electrically shielded room. A touch-sensitive bar was
mounted on the chair. The visual stimuli were displayed on a computer video monitor placed in front of the animals. Each of the six
cues was associated with a combination of reward size (1 drop; 3 or 4 drops) and reward delay (0, 3.3, and 6.9 s). The trials began
when the monkey touched the bar. A visual cue appeared, and the monkey released a bar when a red spot (waiting signal) turned
green (go signal) after a variable interval. If the monkey released the bar 0.2–1 s after this go signal, the trial was considered correct
and the spot turned blue (correct signal). A liquid reward of a small (1 drop, approximately 0.1 mL) or large amount (3 drops, except for
monkey BI, 4 drops) was delivered immediately (0.3 ± 0.1 s) or with an additional delay of either 3.3 ± 0.6 s or 6.9 ± 1.2 s after correct
release of the bar. The cues were chosen with equal probability and were independent of the preceding reward condition. Anticipa-
tory bar releases (before or no later than 0.2 s after the appearance of the go signal) and failure to release the bar within 1 s of the
appearance of the go signal were counted as errors. In the error trials, the trial was terminated immediately, all visual stimuli disap-
peared, and following an inter-trial interval (1 s), the trial was repeated; that is, the reward size/delay combination remained the same as that in the error trial. Behavioral control and data acquisition were performed using a real-time experimentation system (REX).59 Presentation software (Neurobehavioral Systems) was used to display the visual stimuli.
Exp. 4. Cued lottery tasks
The animals performed one of two visually cued lottery tasks: a single-cue or a choice task. Neuronal activity was only recorded dur-
ing the single-cue task.
Animals performed the task under dim lighting conditions in an electromagnetically shielded room. Eye movements were
measured using a video camera system at 120 Hz (EyeLink, SR Research). Visual stimuli were generated using a liquid-crystal display
at 60 Hz, placed 38 cm from the monkey’s face when seated. At the beginning of the single-cue task trials, the monkeys had 2 s to
align their gaze within 3° of a 1°-diameter gray central fixation target. After a fixation for 1 s, a pie chart was presented for 2.5 s, to
provide information regarding the probability and magnitude of rewards in the same location as the central fixation target. The prob-
ability and magnitude of the rewards were associated with the number of blue and green segments of the 8° pie chart, ranging from 0.1 to
1.0 mL in 0.1 mL increments for magnitude, and 0.1 to 1.0 in 0.1 increments for probability. Following a 0.2 s interval from the removal
of the pie chart, a 1 kHz or 0.1 kHz tone of 0.15 s duration was provided to indicate reward or no-reward outcomes, respectively. After
a 0.2 s interval following the high tone, a fluid reward was delivered, whereas no rewards were delivered following the low tone. An
inter-trial interval of 4–6 s was used. During the choice task, animals were instructed to choose one of two peripheral pie charts, each
of which indicated either the probability or magnitude of an upcoming reward. The two target options were presented for 2.5 s at 8° to
the left or right of the central fixation location. The animals received a fluid reward as indicated by the green pie chart of the chosen
target, with the probability indicated by the blue pie chart. Otherwise, no reward was delivered.
A total of 100 pie charts composed of 10 levels of probability and magnitude of rewards were used in the experiments. In the single-
cue task, 100 pie charts were presented once in random order. In the choice task, two pie charts were randomly assigned to the two
options. During one electrophysiological recording session, approximately 30–60 trial blocks of the choice task were interleaved with
100–120 trial blocks of the single-cue task.
Electrophysiological recordings and data preprocessing
Exp. 1
To record the single-unit activity, we used a 16-channel vector array microprobe (V1 X 16-Edge, NeuroNexus), 16-channel U-Probe
(Plexon), tungsten tetrode probe (Thomas RECORDING), or single-wire tungsten microelectrode (Alpha Omega). Electrophysiolog-
ical signals were amplified, bandpass-filtered (200–6000 Hz), and monitored. Single-neuron activity was isolated based on spike
waveforms, either online or offline. For both clustering and offline sorting, the activities of all single neurons were sampled when
the activity of an isolated neuron demonstrated a good signal-to-noise ratio (>2.5). The signal-to-noise ratio was visually checked
by calculating the range of background noise against the spike amplitude, which was monitored online using the OmniPlex Neural
Data Acquisition System, or offline using the Plexon sorter software. The recorded neurons were not blinded. The sample sizes
required to detect the effect sizes (numbers of recorded neurons, recorded trials in a single neuron, and monkeys) were estimated
based on previous studies.32,60 Neural activity was recorded during 60–240 trials of the ILR task. We recorded 590 hippocampal neurons, among which the recording sites appeared to cover all subdivisions (i.e., the dentate gyrus, CA3, CA1, and subicular complex).
Exp. 2
We used conventional techniques to record the single-neuron activity in the STRt, including the caudate and putamen tails. A tungsten microelectrode (1–3 MΩ, FHC; 0.5–1.5 MΩ, Alpha Omega Engineering) was used to record single-neuron activity. The recording
site was determined using a grid system that allowed electrode penetration at 1 mm intervals. We amplified and filtered (0.3 to 10 kHz;
Model 1800, A-M Systems; Model MDA-4I, BAK) signals obtained from the electrodes and collected at 1 kHz. Single neurons were
isolated online using custom voltage–time window discriminator software (Blip; available at http://www.robilis.com/blip/). The pre-
sumed medium spiny neurons were identified based on their low baseline activity (<3 spikes/s) and broad action potentials.61 The recorded neurons were not blinded. The sample sizes required to detect the effect sizes (numbers of recorded neurons, recorded trials in a single neuron, and monkeys) were estimated based on previous studies.62,63 Neural activity was recorded during 10–30 trials of the passive viewing task. We recorded 115 medium spiny neurons in the STRt. In Exp. 2, only single-neuron recording was performed online. We note that the two visual stimuli were termed scene and object in our previous study,64 but here we term them scene and item.
Exp. 3
Conventional techniques were used to record single-neuron activity in the dorsal part of the head of the caudate nucleus (CD). A tungsten microelectrode (1.1–1.5 MΩ, Microprobes for Life Science; 1.0 MΩ, Alpha Omega Engineering Ltd.) was used to record single-neuron activity. The electrophysiological signals were amplified and monitored using a TDT recording system (RZ2, Tucker-Davis Technologies). Single-neuron activity was manually isolated based on the online spike waveforms. The activity of all single neurons was sampled from the activity of presumed projection neurons, which are characterized by a low spontaneous discharge rate (<2 spikes/s) outside the task context and by phasic discharges in relation to one or more behavioral task events.61 Neural activity was recorded during 100–120 trials per block in the delayed-reward task. We recorded from the CD of the left or right hemisphere in each of the three monkeys, yielding 150 CD neurons (51, 31, and 68 from monkeys BI, FG, and ST, respectively).
Exp. 4
Conventional techniques were used to record single-neuron activity in the DS, VS, cOFC (area 13M), and mOFC (area 14o). A tungsten microelectrode (1–3 MΩ, FHC) was used to record single-neuron activity. Electrophysiological signals were amplified, bandpass filtered (50–3,000 Hz), and monitored using a TDT recording system (RZ5D, Tucker-Davis Technologies). Single-neuron activity was manually isolated based on the online spike waveforms. The activity of all single neurons was sampled when the activity of an isolated neuron demonstrated a good signal-to-noise ratio (>2.5). The signal-to-noise ratio was calculated online as the ratio of the spike amplitude to the baseline voltage range on the oscilloscope. The recorded neurons were not blinded. The sample sizes required to detect the effect sizes (numbers of recorded neurons, recorded trials in a single neuron, and monkeys) were estimated based on previous studies.51,53,65 Neural activity was recorded during 100–120 trials of the single-cue task. Neural activity was not recorded during choice trials. We recorded the neurons of a single right hemisphere in each of the two monkeys: 194 DS neurons (98 and 96 from monkeys SUN and FU, respectively), 144 VS neurons (89 SUN and 55 FU), 190 cOFC neurons (98 SUN and 92 FU), and 158 mOFC neurons (64 SUN and 94 FU). In Exp. 4, only single-neuron recording was performed online. We recorded presumed medium spiny projection neurons from the DS and VS23,54,66–68 and presumed pyramidal neurons in the cOFC and mOFC.23,51,52
Statistical analysis
For statistical analysis, we used the statistical software package MATLAB (MathWorks, Exps. 1 and 2), and R (Exps. 3 and 4) for con-
ventional analyses such as linear regression and ANOVA. To analyze the regression matrix using PCA, we used R software. All sta-
tistical tests for the neural analyses were two-tailed.
Behavioral analysis
No new behavioral results were included; however, the procedure for the behavioral analysis was as follows.
Exp. 1
We previously reported that two monkeys learned to retain the item and location information of a sample stimulus.69 Here, we
describe the analysis steps used to check whether the monkey used both item and location information to perform the task.
To examine this, we compared the animals’ actual correct rates during the recording to random correct rates (chi-square test). The
ILR response phase had five options, resulting in a 20% random correct rate. If the animal used an incorrect strategy, such as only
retaining the location information of the sample stimulus and ignoring the item information, the correct rate for the match condition
would be 100% and that for the nonmatch condition would be 0%. Based on the above considerations, we examined the correct rates
of the two animals in the match and nonmatch conditions, respectively. In general, the average correct rates for both animals in the
match and nonmatch conditions were well above chance levels after training.
Exp. 2
We previously reported that two monkeys switched their behavior depending on the value of the item based on the scene.64 Here, we
describe how we checked whether the monkeys learned both the scene and item information. We calculated the correct rate for the
scene-based object-value task. Because the two scenes appear in random sequences, the monkey must switch object choice if
the scene has changed. After performing more than 160 trials, the correct rate reached a plateau above chance. The monkey
was able to switch object choices immediately after the scene changed. Once the monkeys learned this extensively, their choice
behavior became automatic, as the choice tended to occur even when the reward was not delivered after saccades to high-valued
items according to the scene.
Exp. 3
We previously reported that the three monkeys behaved based on temporally discounted values that integrated both the delay and reward size information provided by the visual stimuli.70 Here, we describe the analysis used to check how the monkeys discounted reward value as a function of the delay and reward size. Error rates in task performance were calculated by dividing the total number of errors by the total number of trials for each reward condition and then averaging across all sessions. The average error rates were fitted to the inverse function of reward size with hyperbolic temporal discounting, E = (1 + kD)/(aR), and with exponential temporal discounting, E = 1/(aRe^(-kD)) (E: average error rate, D: delay, R: reward size, k: discounting factor, a: incentive impact). We used the 'optim' function in R, evaluated the goodness of fit of the two models by least-squares minimization, and compared the models by leave-one-out cross-validation, as described previously (Minamimoto et al., 2009).71
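As an illustration of this fitting procedure, the following R sketch fits the hyperbolic model with optim(); the data frame 'dat' and its values are hypothetical placeholders, and only the model form and the least-squares cost follow the description above.

# Hypothetical average error rates E for each reward size R (drops) and delay D (s)
dat <- data.frame(R = rep(c(1, 3), each = 3),
                  D = rep(c(0, 3.3, 6.9), times = 2),
                  E = c(0.30, 0.45, 0.60, 0.10, 0.20, 0.35))
# Hyperbolic model, E = (1 + k*D) / (a*R); sum of squared errors as the cost
sse_hyp <- function(p, d) { k <- p[1]; a <- p[2]; sum((d$E - (1 + k * d$D) / (a * d$R))^2) }
fit_hyp <- optim(c(k = 0.1, a = 5), sse_hyp, d = dat)
fit_hyp$par   # estimated discounting factor k and incentive impact a
# The exponential model, E = 1/(a*R*exp(-k*D)), can be fitted analogously, and the two
# fits compared by their residual sums of squares or by leave-one-out cross-validation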
Exp. 4
We previously reported that monkey behavior depends on expected values, defined as probability times magnitude.23 Here, we describe the analysis steps used to check whether the monkeys' behavior reflected the task parameters, that is, reward probability and magnitude. Importantly, we showed that the monkeys' choice behavior reflected the expected values of the rewards, that is, the probability multiplied by the magnitude. For this purpose, the percentage of trials in which the right-side option was chosen was analyzed in the pooled data using a general linear model with a binomial distribution:
P(chooses R) = 1/(1 + e^(-Z)) (Equation 3)
where the relationship between P(chooses R) and Z is given by the logistic function in each of the following three models: number of pie segments (M1), probability and magnitude (M2), and expected values (M3).
M1: Z = b0 + b1·NpieL + b2·NpieR (Equation 4)
where b0 is the intercept, and NpieL and NpieR are the numbers of pie segments contained in the left and right pie chart stimuli, respectively. The values of b0 to b2 are free parameters estimated by maximizing the log-likelihood.
M2: Z = b0 + b1·PL + b2·PR + b3·ML + b4·MR (Equation 5)
where b0 is the intercept; PL and PR are the reward probabilities for the left and right pie chart stimuli, respectively; and ML and MR are the reward magnitudes for the left and right pie chart stimuli, respectively. The values of b0 to b4 are free parameters estimated by maximizing the log-likelihood.
M3: Z = b0 + b1·EVL + b2·EVR (Equation 6)
where b0 is the intercept, and EVL and EVR are the expected values of the rewards (probability multiplied by magnitude) for the left and right pie chart stimuli, respectively. The values of b0 to b2 are free parameters estimated by maximizing the log-likelihood. We identified the best model to describe monkey behavior by comparing goodness of fit based on the Akaike information criterion and the Bayesian information criterion.72
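The three choice models can be fitted and compared compactly in R with glm() and a binomial link; in the sketch below, the simulated data frame 'trials' and its column names are hypothetical placeholders, while the model formulas mirror Equations 4, 5, and 6.

set.seed(1)
n  <- 500
lv <- seq(0.1, 1, by = 0.1)
trials <- data.frame(PL = sample(lv, n, replace = TRUE), PR = sample(lv, n, replace = TRUE),
                     ML = sample(lv, n, replace = TRUE), MR = sample(lv, n, replace = TRUE))
trials$NpieL <- round(10 * (trials$PL + trials$ML))   # segments shown in the left pie chart
trials$NpieR <- round(10 * (trials$PR + trials$MR))   # segments shown in the right pie chart
trials$choice <- rbinom(n, 1, plogis(5 * (trials$PR * trials$MR - trials$PL * trials$ML)))
m1 <- glm(choice ~ NpieL + NpieR, family = binomial, data = trials)            # Equation 4
m2 <- glm(choice ~ PL + PR + ML + MR, family = binomial, data = trials)        # Equation 5
m3 <- glm(choice ~ I(PL * ML) + I(PR * MR), family = binomial, data = trials)  # Equation 6
sapply(list(M1 = m1, M2 = m2, M3 = m3), AIC)   # goodness-of-fit comparison; BIC() is analogous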
Neural analysis
Peri-stimulus time histograms were constructed for each single-neuron activity aligned at the onset of the visual stimulus. Average activity curves were smoothed for visual inspection using a Gaussian kernel (σ = 20, 15, 10, and 50 ms in Exps. 1–4, respectively), whereas the Gaussian kernel was not used for statistical tests.
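For illustration, this display smoothing could be implemented in R as follows; the spike times below are simulated placeholders, and the kernel width corresponds to the σ = 20 ms used in Exp. 1.

set.seed(1)
spike_times <- sort(runif(80, 0, 0.6))                 # placeholder spike times (s) from one trial
bin   <- 0.001                                         # 1 ms histogram bins
edges <- seq(0, 0.6, by = bin)
rate  <- hist(spike_times, breaks = edges, plot = FALSE)$counts / bin  # rate in spikes/s
sigma <- 0.020                                         # Gaussian kernel, sigma = 20 ms
kern  <- dnorm(seq(-3 * sigma, 3 * sigma, by = bin), sd = sigma)
kern  <- kern / sum(kern)
smoothed <- stats::filter(rate, kern, sides = 2)       # smoothed activity curve (display only)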
To ensure that the four datasets were analyzed as comparably as possible, we used the same criteria for the neural analyses: 1) the same analysis window size, 2) a visual response within a short time window (0.6 s), 3) neural modulations detected at the same significance level (P < 0.05), and 4) a general linear model (ANOVA in Exps. 1 and 2 and linear regression in Exps. 3 and 4). The details of these analytical procedures for the rate-coding and dynamic models are shown below.
Rate-coding model: Conventional analyses to detect neural modulations in each neuron
Exp. 1
For neural responses during the encoding phase after the sample presentation, we evaluated the effects of "item" and "location" for each neuron using two-way ANOVA (P < 0.05 for each). We analyzed neurons that were tested in at least 60 trials (10 trials for each stimulus and 15 trials for each location). On average, we tested 100 trials for each neuron. These results have been previously reported.69
Exp. 2
For neural responses during the appearance of the visual item, we evaluated the effects of "item" and "scene" for each neuron using a paired t-test (P < 0.05 with Bonferroni correction). These results have been previously reported.64
Exp. 3
The neural discharge rates (F) were fitted using a linear combination of the following variables:
F = b0 + bd·Delay + bm·Magnitude (Equation 7)
where Delay and Magnitude are the delay and magnitude of the reward, respectively, as indicated by the visual stimulus, and b0 is the intercept. If bd and bm differed from zero at P < 0.05, the discharge rates were regarded as being significantly modulated by that variable. These results have been previously reported.70
Exp. 4
The neural discharge rates (F) were fitted using a linear combination of the following variables:
F = b0 + bp·Probability + bm·Magnitude (Equation 8)
where Probability and Magnitude are the probability and magnitude of the rewards, respectively, as indicated by the pie chart, and b0 is the intercept. If bp and bm differed from zero at P < 0.05, the discharge rates were regarded as being significantly modulated by that variable. These results have been previously reported.23
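As a minimal sketch of this per-neuron test in R, the simulated trial table below is a hypothetical placeholder; only the regression form follows Equation 8.

set.seed(1)
trials <- data.frame(Probability = sample(seq(0.1, 1, 0.1), 100, replace = TRUE),
                     Magnitude   = sample(seq(0.1, 1, 0.1), 100, replace = TRUE))
trials$F <- 5 + 8 * trials$Probability + rnorm(100)    # simulated discharge rates of one neuron
fit <- lm(F ~ Probability + Magnitude, data = trials)  # Equation 8
summary(fit)$coefficients[, "Pr(>|t|)"] < 0.05         # significant modulation by each variable?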
Population dynamics using principal component analysis
We analyzed neural activity during an identical 0.6 s duration from the sample onset (Exp. 1), item onset (Exp. 2), CUE onset (Exp. 3),
and cue onset (Exp. 4). To obtain a time series of neural firing rates within this time period, we estimated the firing rates of each neuron
for every 0.05 s time bin (without overlap) during the analysis periods. A Gaussian kernel was not used.
Regression subspace
We used a general linear model to determine how items and locations (Exp. 1), items and scenes (Exp. 2), delay and magnitude of
rewards (Exp. 3), and the probability and magnitude of the rewards (Exp. 4) affect the activity of each neuron in the neural populations.
Each neural population was composed of all the recorded neurons in each brain region.
Exp. 1
First, we set the six visual items and four locations as categorical variables. We then described the average firing rate of neuron i at time t as a linear combination of the item and the location in each neural population:
F(i,t,k) = b0(i,t) + b1(i,t)·Item(k) + b2(i,t)·Location(k) (Equation 9)
where F(i,t,k) is the average firing rate of neuron i at time t in trial k, Item(k) is the type of item cued to the monkey in trial k, and Location(k) is the type of location cued to the monkey in trial k. The regression coefficients b0(i,t), b1(i,t), and b2(i,t) describe the degree to which the firing rates of neuron i depend on the mean firing rates (hence, firing rates independent of the task variables, item and location), the degree of firing rate for each item relative to the mean firing rates, and the degree of firing for each location relative to the mean firing rates, respectively, at a given time t during the trials. The interaction term is not included in the model.
In the analysis, we performed preference ordering for item and location in each neuron. Item(k) and Location(k) were the rank-ordered items and locations, respectively, cued to the monkey in trial k. Items 1–6 and locations 1–4 were rank-ordered from the most preferred to the least preferred, respectively, as defined by the mean firing rate during the entire analysis time window from 0.08 to 0.6 s. This preference ordering did not change over time t for each neuron n.
Exp. 2
We first set the eight items and two scenes as the categorical variables. We then described the average firing rate of neuron i at time t as a linear combination of the item and scene in each neural population:
F(i,t,k) = b0(i,t) + b1(i,t)·Item(k) + b2(i,t)·Scene(k) (Equation 10)
where F(i,t,k) is the average firing rate of neuron i at time t in trial k, Item(k) is the type of item cued to the monkey in trial k, and Scene(k) is the type of scene stimulus cued to the monkey in trial k. The regression coefficients b0(i,t), b1(i,t), and b2(i,t) describe the degree to which the firing rates of neuron i depend on the mean firing rates (hence, firing rates independent of the task variables, item and scene), the degree of firing rate for each item relative to the mean firing rates, and the degree of firing for each scene relative to the mean firing rates, respectively, at a given time t during the trials. The interaction term was not included in the model.
In the analysis, Item(k) and Scene(k) were the rank-ordered item and scene, respectively, cued to the monkey in trial k. Items 1–8 and scenes 1 and 2 were rank-ordered from the most preferred to the least preferred, respectively, as defined by the mean firing rate during the whole 0.6 s analysis window after the item onset. This preference ordering did not change over time t for each neuron n.
Exp. 3
We first set the delay and magnitude as 0, 3.3, and 6.9 s and as one and three drops of reward, respectively, during the behavioral task. In the analysis, we normalized these values to range from 0 to 1 by dividing by the maximum value of each: 0, 0.48, and 1 for delay, and 0.33, 0.66, and 1 for magnitude. This was done because these values affect the extent of the regression subspace between the two continuous variables. We then described the average firing rate of neuron i at time t as a linear combination of the delay and magnitude in each neural population:
F(i,t,k) = b0(i,t) + b1(i,t)·Delay(k) + b2(i,t)·Magnitude(k) (Equation 11)
where F(i,t,k) is the average firing rate of neuron i at time t in trial k, Delay(k) is the normalized delay to reward cued to the monkey in trial k, and Magnitude(k) is the normalized number of reward drops cued to the monkey in trial k. The regression coefficients b0(i,t) to b2(i,t) describe the degree to which the firing rates of neuron i depend on the mean firing rates (hence, firing rates independent of the task variables), the delay of rewards, and the magnitude of rewards, respectively, at a given time t during the trials.
Exp. 4
We first set the probability and magnitude as 0.1 to 1.0 and 0.1 to 1.0 mL, respectively. We did not normalize these values because they already ranged from 0 to 1. We then described the average firing rate of neuron i at time t as a linear combination of the probability and magnitude in each neural population:
F(i,t,k) = b0(i,t) + b1(i,t)·Probability(k) + b2(i,t)·Magnitude(k) (Equation 12)
where F(i,t,k) is the average firing rate of neuron i at time t in trial k, Probability(k) is the probability of the reward cued to the monkey in trial k, and Magnitude(k) is the magnitude of the reward cued to the monkey in trial k. The regression coefficients b0(i,t) to b2(i,t) describe the degree to which the firing rates of neuron i depend on the mean firing rates (i.e., firing rates independent of the task variables), the probability of rewards, and the magnitude of rewards, respectively, at a given time t during the trials.
We used the regression coefficients (i.e., the regression table in the case of ANOVA) described in Equations 9, 10, 11, and 12 to identify how the dimensions of the neural population signals were composed of information related to item and location (Exp. 1), item and scene (Exp. 2), delay and magnitude (Exp. 3), and probability and magnitude (Exp. 4) as aggregated properties of individual neural activity. In this step, an encoding model is constructed in which the regression coefficients are explained by a temporal structure in the neural modulation of two categorical variables (Exps. 1 and 2) or two continuous variables (Exps. 3 and 4) at the population level. Our procedures involve targeted dimensionality reduction using the regression subspace19 and are aimed at describing neural modulation dynamics.29
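To make the construction of the regression matrix concrete, the following R sketch builds it for an Exp. 4-like case; the list 'rate' of per-neuron trial tables is a simulated, hypothetical placeholder, and only the per-bin regression of Equation 12 and the N × (C × T) arrangement follow the description above.

set.seed(1)
n_bins <- 12                                            # 0.6 s / 0.05 s analysis windows
rate <- replicate(20, {                                 # hypothetical data: 20 neurons, 80 trials each
  d <- data.frame(Probability = runif(80, 0.1, 1), Magnitude = runif(80, 0.1, 1))
  for (b in 1:n_bins) d[[paste0("F_t", b)]] <- 3 + 4 * d$Probability + rnorm(80)
  d
}, simplify = FALSE)
# X: neurons x (task variables x time bins); entries are b1(i,t) and b2(i,t) from Equation 12
X <- t(sapply(rate, function(d) {
  unlist(lapply(1:n_bins, function(b) {
    coef(lm(d[[paste0("F_t", b)]] ~ Probability + Magnitude, data = d))[c("Probability", "Magnitude")]
  }))
}))
colnames(X) <- paste0(rep(c("Probability", "Magnitude"), n_bins), "_t", rep(1:n_bins, each = 2))
dim(X)   # 20 neurons x 24 columns (2 task variables x 12 time bins)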
Principal component analysis
We used PCA to identify the dimensions of the neural population signal in orthogonal spaces composed of the two variables in each neural population of the four experiments. For each neural population, we first prepared a two-dimensional data matrix X of size N(n) × M(C×T) from the regression coefficient vectors b1(i,t) and b2(i,t) in Equations 9, 10, 11, and 12; its rows correspond to the total number of neurons (n) in each neural population, and its columns correspond to C × T, where C is the total number of conditions (that is, 10: six items and four locations in Exp. 1; 10: eight items and two scenes in Exp. 2; 2: delay and magnitude in Exp. 3; and 2: probability and magnitude in Exp. 4) and T is the total number of analysis windows (i.e., 0.6 s divided by the 0.05 s window size, 12 bins). A series of eigenvectors was obtained by applying PCA once to the data matrix X in each neural population. The PCs of this data matrix are vectors v(a) of length N(n), the total number of recorded neurons, if M(C×T) > N(n); otherwise, the length is M(C×T). PCs were indexed from the principal components explaining the most to the least variance. The eigenvectors were obtained using the prcomp() function in R software. We did not include the intercept term b0(i,t), to focus on the neural modulation by the variables of interest.
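Continuing the sketch above, PCA can then be applied to X with prcomp(); reading the PC1-PC2 trajectory of each task variable out of the loadings of the condition-by-time columns, as done below, is one plausible read-out shown for illustration, not necessarily the exact implementation used here.

pca <- prcomp(X)                                         # applied once per neural population
summary(pca)$importance["Proportion of Variance", 1:2]   # variance explained by PC1 and PC2
loadings <- pca$rotation[, 1:2]                          # one (PC1, PC2) pair per condition-time column
prob_traj <- loadings[grep("^Probability", rownames(loadings)), ]  # 12 x 2 trajectory for probability
mag_traj  <- loadings[grep("^Magnitude",  rownames(loadings)), ]   # 12 x 2 trajectory for magnitude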
Eigenvectors
When we applied PCA to the data matrix X, we decomposed the matrix into eigenvectors and eigenvalues. Each eigenvector had a corresponding eigenvalue. In our analysis, the eigenvectors at time t represented a vector, for example, in the space of delay and magnitude in Exp. 3. The eigenvalues at time t for the delay and magnitude were scalars indicating the extent of variance in the data along that vector. Thus, the first PC was the eigenvector with the highest eigenvalue. We analyzed the eigenvectors for the top two PCs (PC1 and PC2) in the following analyses to describe the geometry in the most predominant dimensions. PCA was applied once to each neural population; thus, the total variance contained in the data differed among the neural populations.
Shuffle control for PCA
To examine the significance of the population structures described by PCA, we performed three shuffle controls, in which the two-dimensional data matrix X was randomized in three ways. In shuffled control 1, matrix X was shuffled by permuting the allocation of neuron n at each time t. This shuffle provided a data matrix X of size N(n) × M(C×T) that eliminated the temporal structure of neural modulation by condition C in each neuron but retained the neural modulation at each time t at the population level. In shuffled control 2, matrix X was shuffled by permuting the allocation of time t in each neuron n. This shuffle provided a data matrix X of size N(n) × M(C×T) that eliminated the structure of neural modulation by condition C maintained in each neuron but retained the neural modulation in each neuron at the population level. In shuffled control 3, matrix X was shuffled by permuting the allocation of both time t and neuron n. In each of the three shuffle controls, matrix X was reconstructed 1,000 times. PCA performance was evaluated by constructing the distributions of the variances explained by PC1 to PC12. The statistical significance of the variances explained by PC1 and PC2 was estimated based on the 95th percentile of the reconstructed distributions of the explained variance or on bootstrap standard errors (i.e., the standard deviation of the reconstructed distribution). We note that because the significant dimensions of the neural population dynamics differed among the 10 neural populations, we analyzed the neural dynamics in the top two dimensions, PC1 and PC2.
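One possible implementation of such a control is sketched below for shuffled control 3 (permuting both the neuron and time allocations); it reuses the matrix X from the sketches above, and the permutation scheme shown is an illustrative choice.

obs_pc1 <- summary(prcomp(X))$importance["Proportion of Variance", 1]
null_pc1 <- replicate(1000, {
  Xs <- matrix(sample(X), nrow = nrow(X), dimnames = dimnames(X))  # entries reallocated across neurons and times
  summary(prcomp(Xs))$importance["Proportion of Variance", 1]
})
obs_pc1 > quantile(null_pc1, 0.95)   # does PC1 exceed the 95th percentile of the shuffled distribution?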
Geometric patterns of the neural dynamics
We detected roughly three types of neural geometry: rotational, curvy, and straight dynamics (Figure S4), as interpreted based on the patterns observed in our analysis and inspired by concepts in prior studies of neural dynamics. Rotational dynamics resembled a full circle, curvy dynamics resembled a half cycle, and straight dynamics resembled a line. We did not include stable dynamics, in which the trajectory geometry stays at a fixed point.
Analysis of eigenvectors and trajectory types
We evaluated the characteristics of the eigenvectors for PC1 and PC2 (i.e., geometry) in each neural population in terms of vector
angle, size, and deviation. The eigenvectors were evaluated for each of the task parameters described above: item and location
in Exp. 1, item and scene in Exp. 2, delay and magnitude in Exp. 3, and probability and magnitude in Exp. 4. The angle is the vector angle against the main PCs, measured from the horizontal axis from 0° to 360°. The size is the length of the eigenvector. The deviation is the difference between the vectors (i.e., the distance between two vectors). The deviation from the mean vector for each neural population was estimated. These three eigenvector characteristics were compared among the populations at P < 0.05, using the Kruskal–Wallis test and the Wilcoxon rank-sum test with Bonferroni correction for multiple comparisons as the basic analyses.29 The vector during the first 0.1 s was extracted from these basic analyses.
To evaluate the trajectory geometry using these selected features, we estimated the accumulated angle difference weighted by the deviation:
Σ_{t=S}^{t=E} dθ (Equation 13)
where d is the deviation between the vectors at times t and t+1, θ is the angle difference between the vectors at times t and t+1, S is zero, and E is the time at which the estimation stops, i.e., 0.6 s. This index is analogous to the rotational force accumulated over time. If the value of the accumulated angle difference was close to zero, the population geometry was stable, such as a straight or non-dynamic structure; that is, it remained at some point in the PC1–PC2 plane. In addition to Σdθ, we estimated d (the distance between vectors), the rotational speed Σθ/0.1 s, and d_s-e, the start-to-end distance.
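A sketch of how Equation 13 could be computed for a single trajectory is given below; 'prob_traj', the 12 × 2 matrix of PC1-PC2 eigenvector coordinates from the PCA sketch above, stands in for any trajectory of interest.

traj <- prob_traj                             # eigenvector coordinates in the PC1-PC2 plane
v1 <- traj[-nrow(traj), ]                     # vectors at time t
v2 <- traj[-1, ]                              # vectors at time t + 1
d  <- sqrt(rowSums((v2 - v1)^2))              # deviation between the vectors at t and t + 1
cosang <- rowSums(v1 * v2) / (sqrt(rowSums(v1^2)) * sqrt(rowSums(v2^2)))
theta  <- acos(pmin(pmax(cosang, -1), 1))     # angle difference between the vectors (radians)
sum(d * theta)                                # accumulated angle difference, Equation 13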
To evaluate the trajectory geometry without the selected features, we used the Lissajous curve function, which describes any geometric pattern in a plane using F(x,y):
x = Ax·cos(ωx·t(i) + Φx) + bx (Equation 14)
y = Ay·cos(ωy·t(i) + Φy) + by (Equation 15)
where ω and Φ represent the cycle of the rotation and its phase deviation, respectively, as a function of time t(i). Ax and Ay represent the amplitudes of the trajectory, whereas bx and by represent the location of the trajectory. For ω, 3.33π indicates one cycle, because the analysis window is 0.6 s. Φ ranges from 0 to 2π for one cycle. We estimated the parameters ωx, Φx, bx, ωy, Φy, and by by maximizing the log-likelihood of the model; nonlinear least squares, using the nls() function in R, was used. A time series of eigenvectors for PC1 and PC2 in 0.05 s analysis windows (12 data points) was used, with a sliding average across three time points (hence, a 0.15 s time resolution).
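A standalone illustration of the nls() fit is sketched below; the trajectory is synthetic (a noisy single cycle), and the starting values are illustrative choices, since convergence of nls() depends on the data and the starting values.

set.seed(2)
tm <- seq(0.025, 0.575, by = 0.05)                           # centers of the 12 bins in the 0.6 s window
traj_xy <- data.frame(t = tm,
                      x = 0.2 * cos(3.33 * pi * tm) + rnorm(12, sd = 0.01),
                      y = 0.2 * cos(3.33 * pi * tm + pi / 2) + rnorm(12, sd = 0.01))
fit_x <- nls(x ~ Ax * cos(wx * t + phx) + bx, data = traj_xy,
             start = list(Ax = 0.2, wx = 3.33 * pi, phx = 0, bx = 0))   # Equation 14
fit_y <- nls(y ~ Ay * cos(wy * t + phy) + by, data = traj_xy,
             start = list(Ay = 0.2, wy = 3.33 * pi, phy = 1, by = 0))   # Equation 15
coef(fit_x); coef(fit_y)                                     # amplitude, omega, phi, and offset per axis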
Bootstrap resampling and clustering using feature-based parameters
We estimated Σdθ, the mean d, the rotational speed Σθ/0.1 s, and d_s-e (the start-to-end distance) using a parametric bootstrap resampling method.36 In each neural population, the neurons were randomly resampled with replacement (allowing duplicates), and a data matrix X of size N(n) × M(C×T) was obtained; note that X is the regression matrix. PCA was applied to the data matrix X, the time series of eigenvectors was obtained, and the four features were estimated from the neural trajectory. This resampling was conducted 1,000 times in each neural population, and the distributions of the four parameters were obtained.
Following the bootstrap resampling, we clustered these parameters based on PCA and a dendrogram across the replicates of the 10 brain regions, i.e., 20,000 replicates (10 brain regions times two conditions times 1,000 replicates). Based on this clustering, the proportion of each identified cluster in each brain region was estimated. We note that bootstrap resampling and clustering across the replicates of all 10 brain regions allowed us to identify how strongly a particular neural population geometry was observed in each of the 10 neural populations.
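A sketch of the neuron-level bootstrap is given below, reusing the hypothetical 'rate' list and matrix construction from the regression-subspace sketch; only one geometry feature (the start-to-end distance) is computed per replicate, and 200 replicates are used for brevity (1,000 in the actual analysis).

build_X <- function(neurons) t(sapply(neurons, function(d) {
  unlist(lapply(1:n_bins, function(b)
    coef(lm(d[[paste0("F_t", b)]] ~ Probability + Magnitude, data = d))[c("Probability", "Magnitude")]))
}))
boot_dse <- replicate(200, {
  Xb <- build_X(sample(rate, length(rate), replace = TRUE))   # resample neurons with replacement
  tr <- prcomp(Xb)$rotation[seq(1, 2 * n_bins, by = 2), 1:2]  # probability trajectory in the PC1-PC2 plane
  sqrt(sum((tr[nrow(tr), ] - tr[1, ])^2))                     # start-to-end distance, d_s-e
})
quantile(boot_dse, c(0.025, 0.5, 0.975))   # spread of the feature across bootstrap replicates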
Bootstrap resampling and clustering based on Lissajous curve parameters
The Lissajous curve parameters for the replicated trajectories were estimated using a bootstrap resampling method.36 In each neural population, the neurons were randomly resampled with replacement (allowing duplicates), and a data matrix X of size N(n) × M(C×T) was obtained; note that X is the regression matrix. PCA was applied to the data matrix X. The time series of eigenvectors for PC1 and PC2, which describe the trajectory, was obtained, and the parameters of the Lissajous curve function were estimated using the nls() function in R. This resampling was conducted 1,000 times in each neural population, and the distributions of the Lissajous parameters were obtained.
Following the bootstrap resampling, we clustered these parameters based on PCA and a dendrogram across the replicates of the 10 brain regions, i.e., 20,000 replicates (10 brain regions times two conditions times 1,000 replicates). In this process, the omega ratio (ωx/ωy) and phi difference (Φx - Φy) were used in addition to ωx, ωy, Φx, and Φy. Based on this clustering, the proportion of each identified cluster in each brain region was estimated. We used the median of the estimated parameters in a cluster to describe the trajectory geometries. We note that bootstrap resampling and clustering across the replicates of all 10 brain regions allowed us to identify how strongly a particular neural population geometry was observed in each of the 10 neural populations.
QUANTIFICATION AND STATISTICAL ANALYSIS
Unless otherwise indicated, all data are presented as means with their distributions (boxplots or data plots). The statistical analyses performed are indicated in the main text and detailed in the STAR Methods. Statistical comparisons, including the shuffle controls and bootstrap resampling, were performed in R software.