Scientific REPORTS | (2019) 9:3504 | https://doi.org/10.1038/s41598-019-39813-y
www.nature.com/scientificreports
Decoding the contents and strength of imagery before volitional engagement
Roger Koenig-Robert & Joel Pearson
Is it possible to predict the freely chosen content of voluntary imagery from prior neural signals? Here we show that the content and strength of future voluntary imagery can be decoded from activity patterns in visual and frontal areas well before participants engage in voluntary imagery. Participants freely chose which of two images to imagine. Using functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis, we decoded imagery content as far as 11 seconds before the voluntary decision, in visual, frontal and subcortical areas. Decoding in visual areas, in addition to perception-imagery generalization, suggested that predictive patterns correspond to visual representations. Importantly, activity patterns in the primary visual cortex (V1) from before the decision predicted future imagery vividness. Our results suggest that the contents and strength of mental imagery are influenced by sensory-like neural representations that emerge spontaneously before volition.
A large amount of psychology and, more recently, neuroscience has been dedicated to examining the origins, dynamics and categories of thoughts1–3. Sometimes, thoughts feel spontaneous and even surprising, while other times they feel effortful, controlled and goal-oriented. When we decide to think about something, how much of that thought is biased by pre-existing neural activity? Mental imagery, a sensory thought, can be triggered voluntarily or involuntarily4. However, how much of the content and strength of our mental images we actually control when we voluntarily generate imagery remains unknown. For example, individuals with post-traumatic stress disorder (PTSD) report a complete lack of control over both the content and strength of their mental imagery5, while evidence suggests that imagery strength varies both between and within individuals in the normal population5,6. Previous research has shown that prefrontal activity can predict future decisions7–10, as can nonconscious sensory activity11, and that mental images can be decoded from early visual cortex12,13. However, it remains unknown whether nonconscious sensory activity influences what we think and how strongly we think it.
To investigate the origins of the content and strength of voluntary imagery, we crafted a thought-based mental imagery decision task in which individuals could freely decide what to imagine while we recorded brain activation using functional magnetic resonance imaging (fMRI). We used multi-voxel pattern analysis (MVPA, see Materials and Methods for details) to decode information contained in spatial patterns of brain activation recorded using fMRI14–16. Additionally, in an independent control experiment, we estimated the temporal reliability of the reported onset of thoughts, as this has been criticized in previous paradigms17. Using a design exploiting the known effect of imagery priming on subsequent binocular rivalry as a function of time18, we show that participants' reports of thought onsets were indeed reliable within the temporal resolution of fMRI.
Models of determinants of decision making postulate that executive areas in the prefrontal cortex would trigger selection processes leading to future choices9,10,19. In addition to the executive areas' involvement in future visual thoughts, we aimed to test whether predictive information could also be decoded from visual areas, as previous results have shown that visual imagery recruits visual areas12,13. To test this, we used both searchlight and visual (from V1 to V4) regions-of-interest (ROI) decoding. We also sought to determine the representational content of the predictive signals: is predictive information, to some extent, similar to perceptual visual representations? To assess this, we perceptually presented gratings outside of attention to participants in separate runs. Functional brain images from the perceptual blocks were then used to train classifiers, which were subsequently tested on imagery blocks both before and after the decision. This so-called perception-imagery generalization cross-decoding was thus used to show common informational content between visual perceptual representations and predictive signals. Finally, we tested whether the subjective strength of visual imagery could be decoded from information in visual areas before reported volition. Such an involvement of visual areas in the future strength of visual imagery would provide further evidence that sensory areas also play an important role in the phenomenology of future thoughts.

School of Psychology, The University of New South Wales, Sydney, Australia. Correspondence and requests for materials should be addressed to R.K.-R. (email: rogkoenig@gmail.com)
Received: 6 August 2018. Accepted: 7 January 2019. Published: xx xx xxxx
Using this paradigm, we found that activity patterns were predictive of mental imagery content as far back as 11 seconds before the voluntary decision of what to imagine, in visual, frontal and subcortical areas. Importantly, predictive patterns in the primary visual cortex (V1) and the lateral prefrontal cortex were similar to perceptual representations elicited by unattended images. We show that the subjective strength (vividness) of future mental imagery can be predicted from activation patterns contained in the primary visual cortex (V1) before a decision is made. Our results suggest that the contents and strength of mental imagery are influenced by sensory-like neural representations that emerge spontaneously before volition. These results are important as they point to a role of visual areas in the pre-volitional processes leading to visual thought production, thus shedding light on the mechanisms of intrusive mental imagery in conditions such as PTSD, as well as the origins of normal mental imagery.
Results
Free decision visual imagery paradigm. Our paradigm consisted of a mental decision leading to the formation of a visual mental image. In every trial, participants had to choose to imagine one of two possible differently colored and oriented gratings while we recorded brain blood-oxygen-level-dependent (BOLD) signals using fMRI (Fig. 1, see Materials and Methods for details). After the start of the trial, participants had a maximum of 20 seconds to freely decide which pattern to think of. As soon as they felt they had made the decision, they pressed a button (always the same button for both gratings) with the right hand, thus starting 10 seconds of imagery generation. During this time, participants imagined the chosen grating as vividly as they could. Subsequently, they were prompted with two questions: "what did you imagine?" and "how vivid was it?", to which they answered by pressing different buttons (Fig. 1). On average, participants took 5.48 s (±0.15 SEM) to decide which grating to imagine, while the average trial time was 31.18 s (see Fig. S1 and Materials and Methods for details). Each trial included a blank period of 10 s at the end to avoid spillover effects from one trial to the next20,21. Participants chose to imagine each grating with similar probabilities (50.44% versus 49.56% for vertical and horizontal respectively, Shannon entropy = 0.997, with a switch probability of 58.59% ± 2.81 SEM, see Materials and Methods for detailed behavioral results).

Figure 1. fMRI task paradigm. Participants had to freely choose between two predefined gratings (horizontal green/vertical red or vertical green/horizontal red, counterbalanced across participants). Each trial started with the prompt "take your time to choose – press right button" for 2 seconds. While the decision was made, a screen containing a fixation point inside a rectangle was shown. This period is referred to as the "pre-imagery time" and was limited to 20 seconds. Participants were instructed to press a button with the right hand as soon as they decided which grating to imagine (always the same button independently of the chosen grating). During the imagery period (10 seconds), participants imagined the chosen grating as vividly as possible. At the end of the imagery period, a question appeared on the screen: "what did you imagine? – Left for vertical green/red – Right for horizontal red/green" (depending on the pre-assigned gratings for the participant). After pressing the relevant button to answer, a second question appeared: "how vivid was it? – 1 (low) to 4 (high)", to which participants answered using one of 4 buttons. After each trial there was a blank interval of 10 seconds during which we instructed the participants to relax and not to think about the gratings or subsequent decisions. Gray hand drawings represent multiple possible button responses, while the black drawing represents a unique button choice.
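For readers interested in the mechanics, the reported choice statistics (Shannon entropy and switch probability) can be computed from a raw choice sequence as follows. This is a minimal illustrative sketch; the function name and the example choice list are ours, not from the study's code.

```python
import math

def choice_stats(choices):
    """Shannon entropy (bits) and switch probability of a binary choice sequence.

    `choices` is a list of trial-wise labels, e.g. 'V'/'H' for the vertical
    and horizontal gratings.
    """
    n = len(choices)
    p = choices.count(choices[0]) / n          # proportion of one alternative
    entropy = 0.0
    for q in (p, 1 - p):
        if q > 0:
            entropy -= q * math.log2(q)        # H = -sum q*log2(q)
    # probability that the choice on trial N differs from trial N-1
    switches = sum(a != b for a, b in zip(choices, choices[1:]))
    p_switch = switches / (n - 1)
    return entropy, p_switch
```

Applied to the pooled proportions reported above (50.44% vs. 49.56%), the entropy is close to 1.00 bits; the reported value of 0.997 plausibly reflects averaging per-participant entropies, which we cannot verify from the text.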
Decoding sanity checks. We first verified the suitability of our decoding approach to classify the contents of visual perception and imagery. We used SVM classifiers trained and tested (in a cross-validation scheme) on 10 s of perception or imagery data and classified the perceptual or imagined stimuli (red/green horizontal/vertical gratings) in visual areas from V1 to V4 (see Materials and Methods for details). Fig. S2 shows the results of this sanity check. We found above-chance decoding accuracy for perception (91.7, 91.7, 91.7 and 71.4%; one-tailed t-test p = 3.1·10−8, 1.2·10−9, 7·10−11 and 1.5·10−3; from V1 to V4) and imagery (66.9, 67, 69.1 and 63.7%; p = 8·10−4, 1.2·10−3, 1·10−4 and 8·10−3). These results are comparable to previous results on decoding perception and imagery22–24 and thus validate our decoding approach.
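The logic of this cross-validated MVPA check can be sketched in plain NumPy. The study used SVM classifiers on real fMRI patterns; here, as an illustrative stand-in only, a correlation-based nearest-centroid classifier is run on synthetic "voxel" patterns with a leave-one-run-out scheme, the run-wise hold-out being the structurally important part.

```python
import numpy as np

def nearest_centroid_acc(X_train, y_train, X_test, y_test):
    """Classify each test pattern by correlation with the class-mean training pattern."""
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    hits = []
    for x, y in zip(X_test, y_test):
        pred = max(centroids, key=lambda c: np.corrcoef(x, centroids[c])[0, 1])
        hits.append(pred == y)
    return float(np.mean(hits))

def leave_one_run_out(X, y, runs):
    """Cross-validated accuracy: hold out one run at a time, as in run-wise fMRI CV."""
    scores = []
    for r in np.unique(runs):
        train, test = runs != r, runs == r
        scores.append(nearest_centroid_acc(X[train], y[train], X[test], y[test]))
    return float(np.mean(scores))

# Synthetic example: 4 runs x 10 trials, 50 'voxels', class signal plus noise.
rng = np.random.default_rng(0)
signal = rng.normal(0, 1, (2, 50))                 # one pattern per grating
runs = np.repeat(np.arange(4), 10)
y = np.tile(np.array([0, 1]).repeat(5), 4)
X = signal[y] + rng.normal(0, 1.0, (40, 50))
acc = leave_one_run_out(X, y, runs)
```

Holding out whole runs (rather than random trials) prevents within-run temporal dependencies from inflating accuracy, which is why this scheme is standard in fMRI decoding.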
Searchlight decoding results. To investigate which brain areas contained information about the contents of imagery, we employed a searchlight decoding analysis on fMRI data from the whole brain16. We used two sources of information to decode the contents of imagery: neural activation patterns within the imagery condition (imagery decoding) and patterns from unattended perceptual stimuli used to decode imagery data (perception-imagery generalization cross-decoding). For the imagery decoding, we trained and tested classifiers using the imagery data. In the imagery-perception generalization analysis, we trained the classifiers using data from the perception scans and tested on imagery data. The latter allowed us to explore shared information between perception and imagery, without the effects of attention (see Materials and Methods for details and for the behavioral attention task during perception).
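The train-on-perception, test-on-imagery logic can be sketched as follows, again with a nearest-centroid stand-in for the SVM and fully synthetic data in which the two conditions share a (weaker, for imagery) class signal. All names and scaling factors here are illustrative assumptions, not values from the study.

```python
import numpy as np

def cross_decode(X_src, y_src, X_tgt, y_tgt):
    """Train class centroids on the source condition, test on the target condition."""
    cents = {c: X_src[y_src == c].mean(axis=0) for c in np.unique(y_src)}
    preds = [max(cents, key=lambda c: np.corrcoef(x, cents[c])[0, 1]) for x in X_tgt]
    return float(np.mean(np.asarray(preds) == y_tgt))

rng = np.random.default_rng(1)
shared = rng.normal(0, 1, (2, 50))              # class signal common to both conditions
y_perc = np.repeat([0, 1], 20)
y_imag = np.repeat([0, 1], 20)
X_perc = shared[y_perc] + rng.normal(0, 1.0, (40, 50))         # 'perception' runs
X_imag = 0.5 * shared[y_imag] + rng.normal(0, 1.0, (40, 50))   # weaker 'imagery' signal
acc_gen = cross_decode(X_perc, y_perc, X_imag, y_imag)
```

Above-chance transfer in such a scheme implies that the two conditions encode the class in at least partly overlapping patterns, which is the inference the generalization analysis relies on.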
We dened the areas bearing information about the contents of imagery as those revealing above chance
decoding accuracy at any point in time during a 28 s time window around the decision (cluster denition thresh-
old p < 0.001, cluster threshold p < 0.05, see Materials and Methods for details). Under this selection criterion,
above chance decoding at any point in time is trivial and not relevant for our question. Rather, the purpose of
this analysis is investigating the temporal dynamics of the imagery-content information. Specically, we were
interested to test whether any area contained information about the contents of imagery before the decision. In
this respect, our analysis is bias-free regarding the temporal position of the information, as we considered many
time-points before and aer the decision (7 points each).
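The searchlight procedure itself slides a small sphere across the volume and decodes from only the voxels inside it, yielding an accuracy value per sphere center. A minimal sketch on a synthetic 3D volume (nearest-centroid classifier as a stand-in for the SVM; the volume, radius, and signal placement are all invented for illustration):

```python
import numpy as np

def sphere_indices(center, shape, radius=2):
    """Voxel coordinates within `radius` of `center`, clipped to the volume."""
    cz, cy, cx = center
    idx = []
    for z in range(max(0, cz - radius), min(shape[0], cz + radius + 1)):
        for y in range(max(0, cy - radius), min(shape[1], cy + radius + 1)):
            for x in range(max(0, cx - radius), min(shape[2], cx + radius + 1)):
                if (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2:
                    idx.append((z, y, x))
    return tuple(np.array(idx).T)

def searchlight_acc(vol_train, y_train, vol_test, y_test, center, radius=2):
    """Decode class labels from the voxels inside one searchlight sphere."""
    idx = sphere_indices(center, vol_train.shape[1:], radius)
    Xtr = np.stack([v[idx] for v in vol_train])
    Xte = np.stack([v[idx] for v in vol_test])
    cents = {c: Xtr[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    preds = [min(cents, key=lambda c: np.linalg.norm(x - cents[c])) for x in Xte]
    return float(np.mean(np.asarray(preds) == y_test))

# Synthetic volume: only voxels near (3, 3, 3) carry class information.
rng = np.random.default_rng(2)
shape, n = (10, 10, 10), 60
y = np.tile([0, 1], n // 2)
vols = rng.normal(0, 1.0, (n,) + shape)
vols[:, 2:5, 2:5, 2:5] += (2 * y[:, None, None, None] - 1) * 1.5
acc_info = searchlight_acc(vols[:40], y[:40], vols[40:], y[40:], center=(3, 3, 3))
acc_far = searchlight_acc(vols[:40], y[:40], vols[40:], y[40:], center=(8, 8, 8))
```

In the real analysis the sphere is centered on every brain voxel in turn, producing a whole-brain accuracy map that is then tested across participants and corrected for multiple comparisons.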
Using the above analysis, we found a network of four areas: frontal, occipital, thalamus and pons (Fig. 2, central panels, see Table S1 for cluster locations in MNI coordinates). We then examined the information-content time course in these areas from −13 to +13 seconds from the reported imagery decision (time = 0). As expected, time-resolved (2 s) decoding yielded lower (but statistically significant) accuracies than averaging over longer periods (see Fig. S2 for comparison), presumably due to its lower signal-to-noise ratio. Importantly, in the context of neuroscience research, decoding accuracy scores are not the most relevant output of classification; rather, their statistical significance is25. Time-resolved classification in the imagery condition reached above-chance decoding accuracy up to 11 seconds before reported imagery onset in the occipital region and thalamus, while significant classification was reached 9 seconds before onset in the pons (Fig. 2; black solid points with inner white circle, p < 0.05, one-sample, one-tailed t-test, controlled for FWER p < 0.05, permutation test, see Methods for details).
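The FWER correction used throughout (see Fig. S3) compares the observed number of significant time points in a time course against a permutation null. A minimal sketch of that logic, using sign-flipping of participants' accuracy deviations for both the pointwise threshold and the count statistic; the pointwise test here mirrors the paper's one-sample t-test, but all data and parameter values are synthetic.

```python
import numpy as np

def tstat(d):
    """One-sample t statistic against zero, along axis 0 (participants)."""
    n = d.shape[0]
    return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))

def count_fwer(acc, chance=0.5, n_perm=1000, alpha=0.05, seed=0):
    """FWER-corrected test on the NUMBER of significant time points.

    `acc`: (participants x time points) decoding accuracies. Randomly
    sign-flipping each participant's deviation from chance builds the null.
    """
    rng = np.random.default_rng(seed)
    d = acc - chance
    n_subj, n_time = d.shape
    t_obs = tstat(d)
    t_null = np.empty((n_perm, n_time))
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))
        t_null[i] = tstat(d * signs)
    thresh = np.quantile(t_null, 1 - alpha, axis=0)      # pointwise threshold
    count_obs = int((t_obs > thresh).sum())              # observed count
    count_null = (t_null > thresh).sum(axis=1)           # null counts
    p_fwer = float((count_null >= count_obs).mean())
    return count_obs, p_fwer

# Synthetic demo: 20 participants x 13 time points, true effect at points 3-8.
rng = np.random.default_rng(42)
acc = 0.5 + rng.normal(0, 0.05, (20, 13))
acc[:, 3:9] += 0.07
n_sig, p_fwer = count_fwer(acc)
```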
The perception-imagery generalization decoding showed significant above-chance accuracy as early as 9 seconds before the onset of imagery in occipital areas (although these results did not survive the control for FWER) and 3 seconds in frontal areas (Fig. 2; grey solid points with inner white circle, p < 0.05, one-sample, one-tailed t-test, controlled for FWER p = 0.003, permutation test), indicating that pre-volitional predictive information shares properties with perception in frontal areas. In subcortical areas, above-chance generalization decoding accuracy was only observed after the onset of imagery (+1 and +11 seconds in the thalamus and the pons respectively) and was not significant after controlling for FWER. Importantly, during the perceptual scans visual attention was diverted by a demanding fixation task (see Materials and Methods), hence such generalization should not be due to high-level volitional or attentional mechanisms. Interestingly, decoding accuracy in occipital areas during the imagery period was lower than expected (see for example26). Previous studies have shown that prior decisions can impair subsequent cognitive tasks27. Therefore, the cognitive load of the decision element of our task could impair imagery, which is consistent with the results of a behavioral control experiment showing that cued imagery (no decision) was stronger than a decision followed by imagery (Fig. 3B,C).
Behavioral imagery onset reliability experiment. We ran an independent behavioral experiment outside the scanner to test whether participants might have begun imagining before they reported having done so, which could explain early above-chance classification. We utilized a method that exploits binocular rivalry to objectively measure imagery strength18,28 as a function of time in a free decision and a cued condition (Fig. 3). We reasoned that if participants were reporting the onset of imagery a few seconds late, this would be detected as an increase in rivalry 'priming' compared to a condition where the onset of imagery is controlled by the experimenter, as such priming is known to be dependent on time18. Figure 3B shows the effects of imagery time on sensory priming for both conditions. Imagery time showed a significant effect on priming for the free decision and cued conditions (ANOVA, F = 7.15, p = 0.002, Fig. 3B), thus confirming the effect of imagery time on priming. Priming in the free decision condition was significantly lower than in the cued condition (ANOVA, F = 5.77, p = 0.021), indicating that participants did not start imagining before they reported doing so (which would have resulted in the opposite pattern) and also suggesting that sensory priming is somehow disrupted by the decision task, perhaps due to cognitive load, analogous to what has been shown in other cognitive tasks27. Importantly, significant differences in priming between 3.33 and 6.67 seconds of imagery time were found for the free decision
and cued conditions (one-tailed t-test, p < 0.05), indicating that this behavioral task can resolve differences in priming spaced 3.33 seconds apart, at least for the first two time points, thus providing a lower bound on the temporal resolution of the reported imagery onset that is comparable to that of fMRI.
Figure3C shows the eects of imagery time on subjective imagery vividness. Imagery time showed also a
signicant eect on vividness for free decision and cued conditions (ANOVA, F = 18.49, p < 105, Fig.3C).
However, differences between free decision and cued conditions were not significant (ANOVA, F = 2.42,
p = 0.127). Again, signicant dierences in vividness between 3.33 and 6.67 seconds of imagery time were found
for the free decision and cued conditions (one-tailed t-test, p < 0.01). While a similar pattern of results could
arguably be explained by subjects starting to imagine the opposite target before they reported it, or imagining the
two possible targets alternatively, these outcomes are not consistent with our fMRI results. is control largely
overcomes one of the major limitations to prior free-choice paradigms, as it enables us to measure precision of
thought-choice reporting17.
Searchlight decoding control analyses. We employed a permutation test to check whether the decoding distributions contained any bias, in which case above-chance decoding would be overestimated and the use of standard parametric statistical tests would be invalid29 (see Materials and Methods for details). Permutation tests yielded similar results to those using parametric tests (Fig. S4), and, importantly, decoding accuracy distributions under the null hypothesis showed no bias, thus validating the use of standard parametric statistical tests (Table S2).
We also conducted a control analysis to test whether the searchlight results could be explained by any spillover from the previous trial. We trained the classifiers on the previous trial (N−1 training) and tested on the subsequent trial (trial N). If there was spillover from the previous trial, this analysis should show similar or higher decoding accuracy in the pre-imagery period. We found no significant above-chance classification for any of the regions, thus ruling out the possibility that these results are explained by spillover (Fig. S5).

Figure 2. Searchlight decoding of the contents of imagery. Using searchlight decoding, we investigated which regions contained information about the contents of mental imagery (see Materials and Methods for details). We defined these regions as those showing above-chance accuracy at any point in time (Gaussian random field correction for multiple comparisons, see Materials and Methods for details). We found 4 such regions (central panels): occipital (O), frontal (F), thalamus (T) and pons (P). We then investigated the temporal dynamics of each of these regions (lateral plots), from −13 to +13 seconds from the voluntary imagery onset (time = 0). We decoded imagery contents using the information from imagery runs (imagery, black line) and using information from perception (perception-imagery generalization, grey line). For the imagery decoding (black line), all four regions showed significant above-chance accuracy both before and after imagery onset, indicating that information from imagery was predictive of the chosen grating before (up to −11 seconds) and after the imagery onset. In contrast, the perception-imagery generalization (grey line) showed significant above-chance decoding before the onset of imagery only in occipital and frontal areas, indicating that perceptual-like information was predictive of the chosen grating only in cortical areas before the imagery onset, and in both cortical and subcortical areas after it. Numbers in the upper-right corners of the slices indicate MNI coordinates. Error bars represent SEM across participants. Full circles represent above-chance decoding (p < 0.05, one-sample t-test against chance: 50%). White points inside full circles represent time courses where the number of significant points was significantly above chance level after correction for family-wise error rate (p < 0.05, permutation test, see Fig. S3 for details).
Visual regions-of-interest (ROI) decoding. Results from the searchlight analysis were inconclusive regarding whether the predictive information before the decision shares similarities with visual perception, as only frontal areas exhibited robust perception-imagery generalization decoding (Fig. 2). To test whether predictive information can be found in visual areas, we conducted a time-resolved decoding analysis only in visual regions-of-interest (ROI) from V1 to V4, defined by an independent functional experiment (see Materials and Methods for details). We reasoned that finding information that predicts the imagery decision in perception-devoted visual areas would be a strong argument in favor of perceptual predictive information.
The imagery ROI decoding analysis revealed similar temporal dynamics to the searchlight approach, showing the earliest above-chance decoding accuracy 11 seconds before the reported imagery decision in the primary visual cortex, V1 (Fig. 4A). In the imagery decoding, all visual ROIs showed above-chance decoding accuracy before imagery onset at different time points (small points, Fig. 4A, p < 0.05, one-sample, one-tailed t-test against chance: 50%); however, only V1 and V4 were consistent across time points (from −11 to −5 and from 5 to 15 seconds, Fig. 4A outline circles, p < 0.05, one-sample, one-tailed t-test, controlled for FWER p < 0.05, permutation test). The early (−11 s) predictive information in primary visual cortex suggests that predictive signals would correspond, at least partly, to visual representations.
The perception-imagery generalization showed more modest effects, with above-chance decoding accuracy in V3 just 3 s before imagery onset (Fig. 4B outline circles, p < 0.05, one-sample t-test against chance: 50%, controlled for FWER p < 0.003, permutation test). The overall low perception-imagery generalization decoding accuracy after imagery onset suggests that the analysis might not be effectively capturing the representational commonalities between perception and imagery reported previously30,31. This discrepancy with previous results could be due to experimental noise or to a lack of representational similarity between perception and imagery. To distinguish between these two alternatives, we performed a new analysis seeking more sensitivity by abandoning the time-resolved approach, as described in the next section.

Figure 3. Behavioral experiment: testing the accuracy of imagery onset reports. We tested perceptual priming and subjective imagery vividness as a function of imagery time as a means to verify the accuracy of reporting the imagery onset. (A) Paradigm. Free decision and cued trials were pseudo-randomized. Perceptual priming was measured as a function of imagery time (3.3, 6.7 and 10 s), as the dominance bias on binocular rivalry. (B) Perceptual priming. Imagery time significantly increased perceptual priming in the free decision and cued conditions (ANOVA, F = 7.15, p = 0.002), and priming in the free decision condition was significantly lower than in the cued condition (ANOVA, F = 5.77, p = 0.021), thus ruling out that participants were reporting the imagery onset after starting to imagine. (C) Imagery vividness. Imagery time also significantly increased subjective imagery vividness in the free decision and cued conditions (ANOVA, F = 18.49, p < 10−5). Stars show significant differences between the first two time points, thus setting a lower bound on the temporal resolution of this behavioral task. These results show that the reported onset of imagery is reliable relative to the temporal resolution of fMRI. Error bars show ±SEM. Black and gray lines present free and cued conditions; * and ** represent p < 0.05 and p < 0.01, two-sample t-test.
Predictive information in visual areas shares properties with perceptual information. While imagery decoding in visual areas suggests that predictive information is perceptual in nature, it does not rule out other possibilities, such as attentional effects. In particular, the time-resolved generalization analysis failed to show strong decoding before the decision in visual areas (Fig. 4A). This could be due to a number of factors, such as differences in the neural representations between imagery and perception, as well as differences in signal-to-noise ratio between these conditions, which could lead to poor classification performance. We thus tested whether abandoning the time-resolved analysis would produce more conclusive results by increasing the signal-to-noise ratio. To achieve more sensitivity, we trained classifiers on perception runs and tested them on the imagery before-decision period (−10 to 0 s) and the after-decision period (0 to 10 s), thus effectively pooling the data across time points for the imagery condition, as opposed to analyzing each point separately (as was done in the time-resolved analysis, Fig. 4B). This analysis showed modest but significant decoding before the decision in V1, and after the decision in V3 (Fig. 5, solid points, p < 0.05, one-sample, one-tailed t-test against chance: 50%). This result thus supports the idea that predictive information is at least partly perceptual in nature and that the predictive perceptual representations would be housed in the primary visual cortex.
Visual areas ROI decoding control analyses. We also conducted a number of control tests on the ROI decoding results to ascertain their validity. Permutation tests on the ROI decoding yielded similar results (Fig. S6). We controlled for whether these results could be accounted for by any spillover from previous trials by again conducting an N−1 analysis. This analysis did not show any above-chance accuracy before the imagery onset, but we found a significant time point at t = 5 s in V4 for the imagery condition (Fig. S7).
Imagery decoding as a function of reported vividness. Next, we investigated the effect of subjective imagery vividness on decoding accuracy for imagery. We divided the trials into low- and high-vividness groups (mean split, see Materials and Methods for details). As expected, the decoding accuracy for imagery content was higher in high-vividness trials, but surprisingly, the strongest differences were observed before the onset of imagery (Fig. S8A). The generalization analysis showed the same trend: we found above-chance decoding only in high-vividness trials (Fig. S8B), suggesting that in more vivid imagery trials, shared representations between perception and imagery emerge more readily before volition. This result suggests that the subjective strength of future imagery is associated with better predictive power in visual areas.
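A mean split of this kind is straightforward to express; a minimal sketch follows. Note that the text does not specify how ratings equal to the mean are assigned, so placing them in the high group here is our assumption.

```python
import numpy as np

def mean_split(vividness):
    """Split trial indices into low/high groups around the mean rating.

    Ties at the mean go to the 'high' group (an assumption; the paper does
    not specify tie handling).
    """
    v = np.asarray(vividness, dtype=float)
    cut = v.mean()
    low = np.flatnonzero(v < cut)
    high = np.flatnonzero(v >= cut)
    return low, high

# Example with 1-4 vividness ratings across eight trials.
low, high = mean_split([1, 2, 2, 3, 4, 4, 4, 1])
```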
Decoding future imagery vividness from pre-imagery responses. Finally, we reasoned that if prior, pre-imagery sensory representations in early visual cortex do indeed dictate the strength of subsequent visual imagery, then the pre-imagery data should predict the reported vividness from the subsequent imagery period. Accordingly, we tested exactly this: we attempted to decode the subjective strength of imagery (i.e. vividness) using only the fMRI data from before the imagery period (Fig. 6). Decoding accuracy was significantly above chance in V1 (62.2%, p = 0.0035, one-sample, one-tailed t-test against chance: 50%), but not in other visual ROIs (p > 0.05, Fig. 6), indicating that information contained in V1 predicted future subjective imagery strength. This result shows that the predictive information in primary visual cortex not only has an influence on the contents of future imagery, but also impacts the subjective quality of the future visual thought.

Figure 4. Decoding the contents of imagery in visual regions-of-interest (ROI). We examined the contents of imagery in visual areas using a ROI approach. Visual areas from V1 to V4 were functionally defined and restricted to the foveal representation (see Materials and Methods for details). (A) Imagery decoding. We found above-chance decoding accuracy for imagery decoding both before (from −11 seconds) and after imagery onset. Different visual ROIs showed significant above-chance decoding accuracy at different time points, while the V1 ROI was the most consistent across time points. (B) Perception-imagery generalization. The cross-decoding generalization analysis showed consistent above-chance decoding accuracy only in V3. Error bars represent SEM across participants. Full points represent above-chance decoding (p < 0.05, one-sample t-test against chance: 50%). Outline circles represent time courses where the number of significant points was significantly above chance level after correction for family-wise error rate (p < 0.05, permutation test, see Fig. S3 for details).
Discussion
We found that neural activation patterns were predictive of the contents of voluntary visual imagery as far as 11 seconds before the choice of what to imagine. These results suggest that the contents of future visual imagery can be biased by current or prior neural representations.
While previous interpretations have assigned predictive signals an unconscious origin7,9,32,33, we remain agnostic as to whether the predictive signals were accompanied by awareness. We acknowledge the inherent limitations of most paradigms at capturing the state of awareness of participants before their decision (see, for example,17). We have nonetheless gone to great lengths to overcome these limitations by developing a behavioral test aimed at probing the accuracy of imagery onset reports (Fig. 3). While this independent experiment suggested that participants were not imagining the gratings before the reported onset, it does not completely exclude the possibility that participants engaged in imagery before the reported onset while in the scanner.
Our results show predictive patterns in occipital, frontal and subcortical areas (Fig. 2). While previous results highlight the role of frontal areas in carrying information about subsequent decisions7,9,10, predictive signals in visual and subcortical areas have not, to the best of our knowledge, been reported. Interestingly, recent results have shown that brainstem centers can be a source of variability, in the form of biases in perceptual decisions driven by arousal mechanisms34. A similar mechanism could support the involvement of subcortical regions in biasing future visual imagery.

Figure 5. Predictive signals in visual areas share information with perceptual representations before and after the decision. To test whether predictive information in visual areas shares properties with perceptual representations, we conducted generalization decoding by training on perception data and testing on imagery data, during a period before (−10 to 0 s) and after (0 to 10 s) the decision. Error bars represent SEM across participants. Significant decoding was found in V1 and V3 before and after the decision, respectively (p < 0.05, one-sample t-test against chance: 50%).

Figure 6. Pre-imagery activation patterns in the primary visual cortex (V1) predict the strength of subsequent visual thoughts. We used pre-imagery data (from −10 to 0 s relative to the voluntary imagery onset) to decode subsequent imagery vividness (high vs. low, see text for details). Information in V1 from before the imagery decision predicted how vivid the future imagery would be. Error bars represent SEM across participants. The full point represents above-chance decoding (p = 0.0035, one-sample t-test against chance: 50%).
Predictive signals in visual areas have perceptual properties. Are the predictive signals low-level visual representations or more abstract signals? Our results suggest that the predictive signals share properties with perceptual representations. Two pieces of evidence support this interpretation. First, predictive signals were found in visual areas V1, V3 and V4, which are devoted to visual processing, suggesting that the predictive information has perceptual properties (Fig. 4). Second, and more conclusively, using brain signals elicited by unattended perceptual gratings, we were able to classify the contents of imagery before the decision (Fig. 5). The result of this generalization decoding analysis shows that predictive information in V1 shares similarities with perception, suggesting that these signals correspond, at least partly, to visual representations.

As for the specific features coded by the predictive sensory-like representations, it is unclear whether they correspond to orientation, to color, or to the conjunction of both. This question could be answered by future experiments on perception-imagery cross-decoding generalization using perceptual stimuli in the form of greyscale oriented gratings and solid color patches, while participants imagine the same colored oriented gratings as in the current study. Such a design would distinguish the specific feature content of these representations: differences in decoding accuracy between color patch-imagery and achromatic grating-imagery generalization should shed light on which features are coded by the pre-volitional signals.
Timing of the predictive signals and possible confounds. The finding of predictive signals up to 11 seconds before the decision may seem surprisingly early. However, early predictive signals have been detected using similar techniques in previous studies on motor decisions (up to 10 and 7 seconds before the decision7,9) and on abstract decisions (up to 4 seconds10). Crucially, questions have been raised about whether the decoding of such signals could correspond to neural activity elicited by the preceding trial. The N-1 trial shifting or "spillover" analysis performed in our study (Figs S5 and S7) is an accepted way to control for this issue19,35. The spillover analysis tests the hypothesis that, if there is a temporal carry-over of information from one trial to the next, predictive signals should be best accounted for by shifting the label of the current trial to the trial before (see Materials and Methods for details). Results of the spillover control analysis showed that the predictive signals in our study are not explained by the previous trial, thus dismissing spillover effects as an explanation of our data (Figs S5 and S7).
Another relevant issue is that sequential dependencies might have an impact on the classifier itself. In other words, any deviation from randomness in the choices across trials (captured, for example, by the entropy value or the probability of switching) could potentially be exploited by the classifier. Previous studies have shown that classifiers trained only on behavioral responses can perform as well as or better than classification on neural responses20,36. While sequential dependencies have been argued to be negligible in previous experiments21, this issue is difficult to dismiss without independent experiments. While, in our experiment, the probabilities of choosing vertical or horizontal were very similar (50.44% and 49.56%, Shannon entropy = 0.997), the probability of switching gratings from one trial to the next deviated from chance (58.59%). Therefore, from the imagery decoding alone, we cannot rule out that sequential dependencies influenced the classification, as the classifier could have reached 58.59% decoding accuracy simply by predicting that the decision on the next trial would be a switch from the previous one. Crucially, our independent perception-imagery generalization decoding analysis does not suffer from sequential-dependency issues, as classifiers were trained exclusively on perception trials (presented in a 15 s-on/15 s-off block design) and tested on imagery trials. The perception-imagery generalization decoding confirmed predictive signals before the decision (Figs 5 and S8B), indicating that our results are not explained by sequential dependencies in the participants' choices.
Predictive information in the primary visual cortex (V1) impacts the subjective strength of future imagery. Interestingly, information contained in the primary visual cortex (V1) predicted the subjective strength of visual imagery (Fig. 6). This suggests that the phenomenology of future mental images is supported by patterns of activation in the primary visual cortex that are present before the onset of voluntary imagery. This result again links information contained in visual areas with the subjective properties of future voluntary imagery.
Choice prediction can be explained by decisions relying on spontaneously generated representations. In previous experiments applying MVPA to study decision processes, predictive information about choices has been interpreted as evidence for nonconscious decision making7–9. Thus, our results could be interpreted as the imagery decision being made (at least partly) non-consciously, supporting the idea that the subjective sensation of making the decision emerges after the decision is already made7,9,32,33.

An alternative hypothesis is that these results reflect decisional mechanisms that rely on spontaneously generated visual representations present before the decision. Since the goal of the task was to randomly choose and imagine a grating as vividly as possible, one strategy might be to choose the pattern that is spontaneously more strongly represented. In other words, spontaneous grating representations might stochastically fluctuate in strength while remaining weak compared to voluntary imagery. Thus, prior to the decision, one grating representation might dominate, making it more prone to decisional thought-selection. An analogous interpretation has been advanced to explain the buildup of neural activity prior to self-initiated movements, i.e., the readiness potential37.
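This thought-selection account can be made concrete with a toy simulation, purely illustrative and not part of the study's analyses: two weak representations fluctuate in strength, the decision selects the momentarily stronger one, and a noisy readout of the pre-decision strengths (standing in for fMRI patterns) then predicts the upcoming choice above chance without any nonconscious decision being made:

```python
import random

random.seed(1)

def simulate(n_trials=2000, readout_noise=1.0):
    """Toy model: on each trial two weak grating representations fluctuate in
    strength; the decision selects the momentarily stronger one; a noisy
    readout of the pre-decision strengths predicts the upcoming choice."""
    correct = 0
    for _ in range(n_trials):
        s = [random.gauss(0, 1), random.gauss(0, 1)]   # spontaneous strengths
        choice = s.index(max(s))                        # decision: stronger wins
        readout = [x + random.gauss(0, readout_noise) for x in s]
        predicted = readout.index(max(readout))         # decode from noisy readout
        correct += predicted == choice
    return correct / n_trials

acc = simulate()
print(acc)  # above 0.5 but below 1.0: noisy pre-decision signals predict choice
```

In this toy world, above-chance decoding before the decision reflects the bias in the raw material of the choice, not a completed decision.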
Interestingly, it has recently been shown that self-initiated movements can be aborted even after the onset of predictive neural signals38, suggesting that the decision can be somewhat dissociated from predictive neural signals. Therefore, our results can be explained by a conscious choice that relies on weak neural representations during decision production, perhaps analogous to blindsight39, subliminal priming40,41 or nonconscious decisional accumulation42. Such a mechanism is intriguing in light of theories of mental imagery and thought generation that propose involuntary thought intrusion as both an everyday event and, in extreme cases, a component of mental disorders like PTSD43,44.
In summary, we think that our results are best explained not in terms of unconscious decision processes (as has been advanced previously in the literature), but rather by a process in which a decision (which could be conscious) is informed by weak sensory representations.
Concluding remarks and future directions. Our study can be seen as the first to capture the possible origins and contents of involuntary thoughts and how they progress into, or bias, subsequent voluntary imagery. This is compatible with the finding that the most prominent differences between low- and high-vividness trials were seen in the pre-imagery period in visual areas, especially the primary visual cortex, which can be interpreted as: when one of the patterns is more strongly represented, it will induce a more vivid subsequent volitional mental image. This is in line with reports showing that imagery vividness depends on the relative overlap of the patterns of activation elicited by visual perception and imagery45. Our results expand that finding by showing that the vividness of future visual thoughts is predicted by information stored in the primary visual cortex.

It is up to future research to reveal whether the representations biasing subsequent voluntary imagery are genuinely non-conscious. This will not only shed light on age-old questions of volition, but also provide a clear mechanism for the pathological intrusive thoughts common across multiple mental disorders.
Material and Methods
Participants. Experimental procedures were approved by the University of New South Wales Human Research Ethics Committee (HREC#: HC12030). All methods in this study were performed in accordance with the guidelines and regulations of the Australian National Statement on Ethical Conduct in Human Research (https://www.nhmrc.gov.au/guidelines-publications/e72). All participants gave informed consent to participate in the experiment. For the fMRI experiment, we tested 14 participants (9 females, aged 29.1 ± 1.1 years, mean ± SEM). We selected the sample size based on both estimations of effect sizes and the number of participants used in previous studies employing decoding to track brain signals predictive of subsequent decisions7–9. Previous works tested from 8 to 14 participants; we thus used the upper bound of that range in order to maximize the reliability of the results. We performed power analyses, based on effect size estimations using G*Power 346, to corroborate that this number of participants was adequate to achieve a power of at least 0.8. Soon et al.'s study on the pre-volitional determinants of decision making9 tested 14 participants, achieving a power of 0.812 in the time-resolved decoding analysis, while Bannert and Bartels' study on perception-imagery cross-decoding generalization30 tested 8. Post-hoc effect size analysis revealed that they would have needed to test 12 participants to achieve a power of 0.8. For the behavioral free decision and cued imagery priming task, we invited all 14 previous participants to take part in this psychophysics experiment. Only 8 participants (4 females, aged 29.3 ± 0.5 years) were able to come back to complete this new experiment.
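As a rough sanity check on such numbers, the power of a one-tailed one-sample t-test can be approximated with a normal approximation. This is a Python sketch (the paper used G*Power 3, whose exact noncentral-t computation gives slightly lower values for small n), and the effect size below is an illustrative assumption, not the paper's estimate:

```python
import math
from statistics import NormalDist

def approx_power_one_sample(d, n, alpha=0.05):
    """Approximate power of a one-tailed one-sample t-test via the normal
    approximation: power ~ Phi(d * sqrt(n) - z_(1-alpha))."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(d * math.sqrt(n) - z_alpha)

# Illustrative large effect size (d = 0.8 is an assumption for this sketch):
power = approx_power_one_sample(0.8, 14)
print(round(power, 3))
```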
fMRI free decision visual imagery task. We instructed participants to choose between two predefined gratings (horizontal green/vertical red or vertical green/horizontal red, counterbalanced across participants), which were familiar to the participants from prior training sessions. We asked the participants to refrain from following preconceived decision schemes. In the scanner, participants were provided with two dual-button boxes, one held in each hand. Each trial started with a prompt reading "take your time to choose – press right button" for 2 seconds (Fig. 1). After this, a screen containing a fixation point was shown while the decision as to what to imagine was made. This period is referred to as the "pre-imagery time" and was limited to 20 seconds. Participants were instructed to settle their mind before deciding. Participants pressed a button with the right hand as soon as they decided which grating to imagine. Participants reported that in some trials they felt in control of their decision, whereas in other trials one of the gratings just "popped out" in their mind. Importantly, participants were instructed to press the button as soon as they reached the decision or a grating appeared in their mind. After the button press, the fixation point became brighter for 100 ms, indicating to the participants that the imagery onset time had been recorded. During the imagery period (10 seconds), participants were instructed to imagine the chosen pattern as vividly as possible, trying, if possible, to project it onto the screen. At the end of the imagery period, a question appeared on the screen: "what did you imagine? – Left for vertical green/red – Right for horizontal red/green" (depending on the pre-assigned patterns for the participant). After giving the answer, a second question appeared: "how vivid was it? – 1 (low) to 4 (high)", to which participants answered using 4 different buttons. After each trial, there was a blank interval of 10 seconds during which we instructed the participants to relax and try not to think about the gratings or any subsequent decisions. Systematic post-experiment interviews revealed that some participants (n = 4) could not help thinking about the gratings in some trials during the inter-trial interval. They reported different strategies to avoid these thoughts, such as ignoring them, replacing them with another image/thought, or choosing the other grating when the decision came. The remaining participants (n = 10) reported not having any thoughts or mental images about the gratings during the rest period. We tested whether the effects we found could be explained by the former group of participants, who could not refrain from thinking about the gratings, by repeating the analysis using only data from the participants who did not think about or imagine gratings outside the imagery period (n = 10). Fig. S10 shows the results of this control. The results are comparable to those shown in Fig. 2, ruling out the possibility that the effects we report were driven by the 4 participants who had thoughts about the gratings in the rest period. We delivered the task in runs of 5 minutes, during which the participants completed as many trials as possible. Participants chose to imagine horizontal and vertical gratings with similar probability (50.44% versus 49.56% for vertical and horizontal gratings respectively, mean Shannon entropy = 0.997 ± 0.001 SEM) and showed an average probability of switching gratings from one trial to the next of 58.59% ± 2.81 SEM. Participants completed on average 7.07 runs each, with each run containing an average of 9.2 trials.
Behavioral imagery onset reliability experiment. Since the self-report of the onset of decisions has been criticized for its unreliability and unknown variance17, we developed a novel, independent psychophysics experiment to test its reliability. We objectively measured imagery strength as a function of time for a subset of the participants from the fMRI experiment. Importantly, the results of this experiment revealed that the reported onsets of decisions are indeed reliable relative to the temporal resolution of the fMRI (Fig. 3).

We employed two conditions: free decision (freely chosen imagined stimulus and imagery onset) and cued (i.e., imposed imagined stimulus and imagery onset); see Fig. 3A for a schematic of the paradigm. We used binocular rivalry priming as a means to objectively measure sensory imagery strength18,47,48. When imagining one of the two competing rivalry stimuli prior to a binocular rivalry presentation, rivalry perception is biased towards the imagined stimulus, with greater levels of priming as the imagery time increases18; see18,28 for discussion of why this is an objective measure of imagery strength rather than visual attention, binocular rivalry control or response bias. We asked participants to imagine one of the rivalry gratings for different durations and then measured rivalry priming as a function of these imagery durations (Fig. 3B). We reasoned that if participants reported the onset of imagery a few seconds after they actually started imagining, this would be detected as an increase in priming compared to the condition where the onset of imagery was controlled by the experimenter. Thus, in the free decision condition, participants had to freely choose to imagine one of the two predefined gratings (horizontal green/vertical red or vertical green/horizontal red, counterbalanced across participants). In the cued condition, participants were presented with a cue indicating which grating to imagine, thus imposing the onset of imagery as well as which grating had to be imagined. Each trial started with the instruction "press spacebar to start the trial" (Fig. 3A). Then, either the instruction "CHOOSE" or a cue indicating which grating to imagine (e.g., "horizontal red") was presented for 1 second. In the free decision condition, the imagery time started after the participant chose the grating to imagine, which they indicated by pressing a key on the computer keyboard (Fig. 3A). In the cued imagery condition, the imagery time started right after the cue disappeared (i.e., no decision time). We tested 3 imagery times (3.33, 6.67 and 10 seconds). After the imagery time, a high-pitched sound was delivered (200 ms) and both gratings were presented through red/green stereo glasses at fixation for 700 ms. Then, participants had to report which grating was dominant (i.e., horizontal red, vertical green, or mixed if no grating was dominant) by pressing different keys. After this, they had to answer which grating they had imagined (for both free decision and cued trials). Participants then rated their imagery vividness from 1 (low) to 4 (high) by pressing one of the 4 buttons on their response boxes. Free decision and cued trials, as well as imagery times, were pseudo-randomized within a block of 30 trials. We added catch trials (20%) in which the gratings were physically fused and equally dominant to control for the reliability of self-report18,49. We tested 120 trials for each of the free decision and cued imagery conditions (40 trials per time point), plus 48 catch trials evenly divided among time points.
Raw priming values were calculated as the number of congruent dominant gratings in binocular rivalry (e.g., imagined vertical led to vertical being dominant in binocular rivalry) divided by the total number of trials, excluding mixed-dominance (piecemeal) trials, for each time point and condition independently. Raw vividness values were calculated as the average per time point and condition, excluding mixed perception trials. Priming and vividness were normalized as z-scores within participants and across time points and conditions to account for baseline differences across participants, while conserving relative differences amongst conditions and time points. Rivalry-dominance self-report reliability was verified with fake rivalry catch trials, where gratings were physically fused and equally dominant; these were reported as mixed above chance level (83.8%, p = 0.002, one-sample t-test against baseline). Priming and vividness z-scores were subjected to a one-way ANOVA to detect main effects of condition. We also performed post-hoc two-sample t-tests to verify that priming and vividness scores differed significantly between time points (Fig. 3).
We tested this independent behavioral experiment on 8 participants from the fMRI experiment (all 14 original participants were invited, but only 8 were able to come back), who had extensive experience as subjects in psychophysics experiments. We further sought to test whether these results would generalize to completely inexperienced participants who had not taken part in the fMRI experiment (N = 10). We did not, however, find a significant increase of priming or vividness as a function of time (see Fig. S11), suggesting that this is a highly demanding task and that experience in psychophysics might be important to perform it properly (i.e., being able to hold the mental image for the duration of the imagery time).
Functional and structural MRI parameters. Scans were performed at the Neuroscience Research Australia (NeuRA) facility, Sydney, Australia, in a Philips 3T Achieva TX MRI scanner using a 32-channel head coil. Structural images were acquired using a turbo field echo (TFE) sequence consisting of 256 T1-weighted sagittal slices covering the whole brain (flip angle = 8 deg, matrix size = 256 × 256, voxel size = 1 mm isotropic). Functional T2*-weighted images were acquired using an echo planar imaging (EPI) sequence with 31 slices (flip angle = 90 deg, matrix size = 240 × 240, voxel size = 3 mm isotropic, TR = 2000 ms, TE = 40 ms).
fMRI perception condition. We presented counter-phase flickering gratings at 4.167 Hz (70% contrast, ~0.5 degrees of visual angle per cycle). They were presented in their respective predefined colors and orientations (horizontal green/vertical red or vertical green/horizontal red). The gratings were convolved with a Gaussian-like 2D kernel to obtain smooth-edged circular gratings. Gratings were presented inside a rectangle (the same that was used in the imagery task, Fig. 1), and a fixation point was drawn at the center (as in the imagery task). Within a run of 3 minutes, we presented the flickering patterns in a block manner, interleaved with fixation periods (15 seconds each). Importantly, an attention task was performed, consisting of detecting a change in fixation point brightness (+70% for 200 ms). Fixation changes were allocated randomly during a run, from 1 to 4 instances. Participants were instructed to press any of the 4 buttons as soon as they detected the changes. Participants showed high performance in the detection task (d-prime = 3.33 ± 0.13 SEM).
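For reference, a standard d-prime computation looks like this. A Python sketch only; the log-linear correction shown is one common convention and not necessarily the exact procedure used here, and the counts are hypothetical:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection d' = z(hit rate) - z(false-alarm rate).

    The log-linear correction (add 0.5 to each cell) avoids infinite z-scores
    when a rate is exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one participant's brightness-detection performance:
dp = d_prime(48, 2, 1, 49)
print(round(dp, 2))
```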
Functional mapping of retinotopic visual areas. To functionally determine the boundaries of visual areas V1 to V4 independently for each participant, we used the phase-encoding method50,51. Double wedges containing dynamic colored patterns cycled through 10 rotations in 10 min (retinotopic stimulation frequency = 0.033 Hz). To ensure deployment of attention to the stimulus during the mapping, participants performed a detection task: pressing a button upon seeing a gray dot anywhere on the wedges.
Experimental procedures. We performed the 3 experiments in a single scanning session lasting about 1.5 h. Stimuli were delivered using an 18″ MRI-compatible LCD screen (Philips ERD-2, 60 Hz refresh rate) located at the end of the bore. All stimuli were delivered and responses gathered using Psychtoolbox-352,53 for MATLAB (The MathWorks Inc., Natick, MA, USA) with in-house scripts. Participants' heads were restrained using foam pads and adhesive tape. Each session followed the same structure: first the structural scanning, followed by the retinotopic mapping. Then the perception task was alternated with the imagery task until 3 runs of the perception task were completed. The imagery task was then repeated until 7 or 8 runs (depending on the participant) were completed in total. Pauses were assigned in between the runs. The first 4 volumes of each functional run were discarded to account for the equilibrium magnetization time, and each functional run started with 10 seconds of fixation.
Phase-encoded retinotopic mapping analysis. Functional MRI retinotopic mapping data were analyzed using the Fast Fourier Transform (FFT) in MATLAB. The FFT was applied voxel-wise across time points. The complex output of the FFT contained both the amplitude and phase information of the sinusoidal components of the BOLD signal. Phase information at the frequency of stimulation (0.033 Hz) was then extracted, using its amplitude as a threshold (2 SNR), and overlaid on each participant's cortical surface reconstruction obtained using Freesurfer54,55. We manually delineated boundaries between retinotopic areas on the flattened surface around the occipital pole by identifying voxels showing phase reversals in the polar angle map, representing the horizontal and vertical visual meridians. In all participants, we clearly defined five distinct visual areas: V1, V2, V3d, V3v and V4; throughout this paper, we merge V3d and V3v and label them V3. All four retinotopic labels were then defined as the intersection with the perceptual blocks (grating > fixation, p < 0.001, FDR corrected), thus restricting each ROI to the foveal representation of each visual area.
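The phase-extraction step amounts to reading off the phase of each voxel's BOLD time series at the stimulation frequency. A minimal single-frequency DFT sketch in Python (the study used MATLAB's FFT on real data; the series below is synthetic):

```python
import cmath
import math

def phase_at_frequency(signal, freq, tr):
    """Phase (radians) of a time series at a single frequency via a one-bin DFT,
    sketching the voxel-wise FFT phase extraction described above."""
    coef = sum(x * cmath.exp(-2j * math.pi * freq * i * tr)
               for i, x in enumerate(signal))
    return cmath.phase(coef)

# Synthetic BOLD-like series: 20 cycles of a 30 s period (0.033 Hz) sampled at
# TR = 2 s over 600 s, with a known phase of 0.7 rad.
tr, freq, true_phase = 2.0, 1 / 30, 0.7
signal = [math.cos(2 * math.pi * freq * i * tr + true_phase) for i in range(300)]
print(round(phase_at_frequency(signal, freq, tr), 3))  # recovers ~0.7
```

In the real analysis, this per-voxel phase encodes the polar angle of the wedge at the moment the voxel responds, which is what the phase-reversal boundaries are drawn from.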
Functional MRI signal processing. All data were analyzed using SPM12 (Statistical Parametric Mapping; Wellcome Trust Centre for Neuroimaging, London, UK). We realigned functional images to the first functional volume and high-pass filtered them (128 seconds) to remove low-frequency drifts in the signal, with no additional spatial smoothing. To estimate the hemodynamic response function (HRF), we generated regressors for each grating (horizontal green/vertical red or vertical green/horizontal red) for each run and experiment (perception and imagery) independently. We used a finite impulse response (FIR) model as the basis function. This basis function makes no assumptions about the shape of the HRF, which is important for the analysis of the free decision imagery data9. We employed a 14th-order FIR basis function encompassing 28 seconds, from −13 to +13 seconds relative to the imagery onset, thus obtaining 14 bins, each representing one TR. For the perception condition, we employed a 1st-order FIR basis function from the onset of each perceptual block to its end (15 seconds). We also employed 1st-order FIR basis functions for the sanity-check imagery decoding (from 0 to 10 s, Fig. S2) and the before-after decision perception-imagery generalization (−10 to 0 and 0 to 10 s from the imagery decision, Fig. 5). For the vividness analysis, we split the trials into low-vividness (ratings 1 and 2) and high-vividness (ratings 3 and 4) and then obtained the regressors for both gratings as explained above.
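The FIR model amounts to one delta regressor per post-onset time bin. A minimal sketch of how such a design matrix is built (illustrative Python; the study used SPM12's FIR implementation, and the onset time below is hypothetical):

```python
def fir_design_matrix(onsets, n_scans, order, tr=2.0, start_offset=0.0):
    """Build an FIR design matrix: one column per time bin after each onset.

    Column k is 1 for the scan falling k TRs after (onset + start_offset),
    so no assumption is made about the shape of the HRF."""
    X = [[0.0] * order for _ in range(n_scans)]
    for onset in onsets:
        first_scan = round((onset + start_offset) / tr)
        for k in range(order):
            scan = first_scan + k
            if 0 <= scan < n_scans:
                X[scan][k] = 1.0
    return X

# A 14th-order FIR spanning -13..+13 s around a hypothetical imagery onset at
# t = 41 s, with TR = 2 s, in a run of 60 scans:
X = fir_design_matrix([41.0], n_scans=60, order=14, start_offset=-13.0)
```

Fitting this matrix with the GLM yields one beta per bin, which is how the time-resolved (-13 to +13 s) decoding obtains a separate pattern estimate per TR.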
Multi-voxel pattern analysis (MVPA). We used a well-established decoding approach to extract the information related to each grating contained in the pattern of activation across voxels of a given participant (in their "native" anatomical space), using The Decoding Toolbox (TDT)56. Using a leave-one-run-out cross-validation scheme, we trained an L2-norm regularized linear support vector machine (SVM, as implemented in LIBSVM) on beta values from all but one run and then tested it on the remaining run. No additional scaling (normalization) was performed on the data, as beta values represent a scaled version of the data relative to the run mean. Training and testing were repeated until every run had served as the test set, and the results were then averaged across validations (7- or 8-fold, depending on the participant). We performed leave-one-run-out cross-validation for every temporal bin independently.
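The leave-one-run-out scheme can be sketched as follows. For self-containment, this Python sketch substitutes a nearest-centroid classifier for the study's linear SVM (LIBSVM via TDT), and the feature vectors are hypothetical stand-ins for beta patterns:

```python
from statistics import mean

def fit_centroids(X, y):
    """Nearest-centroid stand-in for the study's linear SVM."""
    classes = sorted(set(y))
    return {c: [mean(col) for col in
                zip(*[x for x, lab in zip(X, y) if lab == c])]
            for c in classes}

def predict(model, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda c: dist(model[c], x))

def leave_one_run_out(runs):
    """runs: list of (features, labels) per run. Each run serves once as the
    test set while the classifier is trained on all remaining runs."""
    accs = []
    for i, (X_test, y_test) in enumerate(runs):
        X_train = [x for j, (X, _) in enumerate(runs) if j != i for x in X]
        y_train = [lab for j, (_, y) in enumerate(runs) if j != i for lab in y]
        model = fit_centroids(X_train, y_train)
        hits = sum(predict(model, x) == lab for x, lab in zip(X_test, y_test))
        accs.append(hits / len(y_test))
    return mean(accs)

# Toy, well-separated two-voxel patterns: 3 runs with one "H" and one "V" trial.
runs = [([[0.0, 0.1], [1.0, 0.9]], ["H", "V"]),
        ([[0.1, 0.0], [0.9, 1.0]], ["H", "V"]),
        ([[0.0, 0.0], [1.0, 1.0]], ["H", "V"])]
print(leave_one_run_out(runs))  # 1.0 on this separable toy data
```

Splitting by run rather than by trial keeps training and test data independent, which is the point of the scheme.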
We also employed cross-classification to generalize information between the perception and imagery tasks in the "perception-imagery generalization" analysis. For the perception-imagery cross-classification, we trained on the ensemble of the perception runs and tested on the ensemble of the imagery runs. In each perception run, green and red gratings were shown pseudorandomly in 6 blocks of 15 s each. Perceptual blocks (15 s) were convolved with a 1st-order FIR filter, yielding regressors for red and green perceptual gratings, as explained in the previous section. Imagery trials were pre-processed exactly as in the imagery decoding, yielding time-resolved (2 s) or block (10 s) regressors (see the previous section for details). Thus, classifiers trained on the perceptual runs (e.g., perceptual vertical-green vs. perceptual horizontal-red) were tested on the imagery data (e.g., imagined vertical-green vs. imagined horizontal-red). Accuracy was calculated as in the imagery decoding (e.g., percentage of correct vertical-green vs. horizontal-red classifications), except that the training-testing procedure was performed only once (i.e., all perceptual data were used to train and all imagery data were used to test the classifiers), since cross-validation is not necessary in such cross-classification schemes, where the training and testing data are different and independent (as opposed to the imagery decoding condition, where a fraction of the data was used for training and another for testing).
Content courtesy of Springer Nature, terms of use apply. Rights reserved
Scientific REPORTS | (2019) 9:3504 | https://doi.org/10.1038/s41598-019-39813-y
www.nature.com/scientificreports
We employed 2 dierent decoding approaches: searchlight and region-of-interest (ROI). We used a spherical
searchlight of 3 voxels of radius and obtained volumes in which a value of decoding accuracy was assigned to
each voxel. We normalized the decoding accuracy volumes into the MNI space and applied a spatial smoothing
of 8 mm FWHM, which has been found to be optimal in order to account for anatomical idiosyncrasies across
participants57. We then performed a one-tail one-sample t-test against 50% (chance level) across participants for
every voxel. We corrected for multiple comparisons using cluster-extent based thresholding employing Gaussian
Random Field theory58,59, as implemented in FSL60. We used a primary threshold of p < 0.001 at the voxel level,
as recommended in previous studies61, and a cluster level threshold of p < 0.05 in every time point volume inde-
pendently. Importantly, these thresholds have been shown to be valid within the nominal false positive ratios62.
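The group-level step, a voxel-wise one-tailed t-test of accuracy against 50% with a primary threshold of p < 0.001, might be sketched as below (synthetic accuracy maps; the subsequent Gaussian-random-field cluster correction, done in FSL in the study, is not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical searchlight output: one accuracy map per participant
# (participants x voxels), already in MNI space and smoothed.
rng = np.random.default_rng(2)
n_subjects, n_voxels = 16, 1000
acc_maps = rng.normal(loc=0.50, scale=0.05, size=(n_subjects, n_voxels))
acc_maps[:, :50] += 0.10  # a patch of genuinely informative voxels

# One-tailed one-sample t-test against chance (50%) at every voxel;
# voxels passing p < 0.001 define the primary (cluster-forming) threshold.
t, p = stats.ttest_1samp(acc_maps, 0.50, axis=0, alternative="greater")
primary_mask = p < 0.001
print(f"{primary_mask.sum()} voxels pass the primary threshold")
```

Cluster-extent correction would then operate only on the voxels surviving this primary mask.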
ROI decoding was used to test information content in visual areas specically. We dened the boundaries of
visual areas from V1 to V4 which volumes were used as ROI. Note that because visual ROI were dened on the
cortical surface (see phase-encoded retinotopic analysis for details), only gray-matter containing voxels were
considered, as opposed to the searchlight approach which also considers non-gray matter containing voxels,
potentially explaining dierences on sensitivity between these approaches.
We tested whether there was a difference in the average BOLD response between stimuli (i.e., a univariate difference).
We did not find any significant differences (p > 0.05, uncorrected) in the average BOLD response (Fig. S9), ruling
out the possibility that our results could be explained by differences in the average level of activity across
conditions.
Permutation test. In order to validate the use of standard parametric statistics, we performed a permutation
test and thus empirically determined the distribution of decoding accuracies under the null hypothesis63.
Previous reports have highlighted the possibility of obtaining skewed decoding distributions, which would invalidate
the use of standard parametric statistical tests29. We thus randomly shuffled the labels (i.e., horizontal-red/
vertical-green) among trials and within blocks (i.e., the number of red/green imagined trials was conserved within
a run but trial labels were shuffled) for each participant and condition (imagery and generalization) to generate
empirical data under the null hypothesis. After reshuffling the labels, we generated regressors for each stimulus
and performed decoding following the same procedure described in the previous paragraph. We repeated this
procedure 1000 times and obtained the empirical distribution under the null hypothesis. At each iteration, the
second-level analysis (across participants) consisted of averaging the results across participants (exactly as performed
on the original data), from which we obtained confidence intervals for each decoding time point and
area (Figs S4 and S6) using the percentile method63. Our results show that the decoding null distribution followed
a normal distribution (Table S2) and, importantly, significant results using permutation-test confidence intervals
were comparable to the results using standard parametric tests (compare significant points in Figs 2 and 3 with
Figs S4 and S6). This analysis thus validates the use of standard parametric tests to assess significance in our dataset.
Across-time-points family-wise error rate (FWER) control. We estimated the probability of obtaining
n significantly above-chance decoding time points (p < 0.05, one-tailed t-test) under the null
hypothesis. To do this, we employed the data from the null distribution obtained with the permutation test (randomly
shuffled labels, 1000 iterations; see previous paragraph for details). Fig. S3 shows the result of this analysis.
Insets show the family-wise error rate for the empirically observed number of above-chance decoding time points
for each area.
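This count-based FWER logic can be illustrated with simulated permutation data: for every permutation, count the time points that pass p < 0.05, then read off how often a count at least as large as the observed one occurs under the null (the observed count of 5 used here is purely illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_perms, n_subjects, n_timepoints = 1000, 16, 13

# Hypothetical null decoding accuracies (label-shuffled permutations),
# shape: permutations x participants x time points, centered on chance.
null = rng.normal(0.5, 0.05, size=(n_perms, n_subjects, n_timepoints))

# For each permutation, count how many time points reach p < 0.05
# (one-tailed t-test against chance) -> distribution of counts under H0.
t, p = stats.ttest_1samp(null, 0.5, axis=1, alternative="greater")
n_sig = (p < 0.05).sum(axis=1)  # one count per permutation

# Family-wise error rate for an observed count of significant time points.
observed = 5
fwer = (n_sig >= observed).mean()
print(f"P(>= {observed} significant time points under H0) = {fwer:.3f}")
```

Runs of several significant time points are rare under the null, which is what licenses treating long significant stretches as reliable.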
Spillover eect (N-1) decoding control. We conducted a control analysis to directly test whether the
searchlight results could be explained by any spill over from the previous trial, as performed in a previous study
(Soon et al.19). To do this, we shied the labels by one trial (N-1). Briey, the rationale behind this control is the
following: if there was spill over from the previous trial, this analysis should show higher decoding accuracy in
the pre-imagery period as eects from the previous trial would spillover over the next trial (for a comprehensive
explanation of the rationale please refer to Soon et al.19). All the decoding details were otherwise identical to what
is described in the section “Multi-voxel pattern analysis (MVPA)” except for that the rst trial of each run was not
considered as there was no N-1 trial in that case. Analogously, for the perception-imagery generalization, training
was performed on perception data and tested on imagery trials labeled as N-1.
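The N-1 relabeling itself reduces to shifting the label vector by one trial and dropping each run's first trial, e.g. (hypothetical labels):

```python
import numpy as np

# Hypothetical trial labels for one run, in chronological order.
labels = np.array([0, 1, 1, 0, 1, 0, 0, 1])

# Relabel each trial with the PREVIOUS trial's choice (N-1). The first
# trial of the run is dropped, since it has no preceding trial.
shifted_labels = labels[:-1]             # trial i gets trial i-1's label
trials_kept = np.arange(1, len(labels))  # indices of the trials that remain

# Decoding then proceeds exactly as before, but on (trials_kept,
# shifted_labels): high pre-imagery accuracy here would indicate spillover.
print(list(zip(trials_kept, shifted_labels)))
```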
Data Availability
The datasets generated and/or analysed during the current study are available from the corresponding
author on reasonable request.
References
1. Fodor, J. A. The Modularity of Mind. (MIT Press, 1983).
2. Haynes, J.-D. & Rees, G. Decoding mental states from brain activity in humans. Nat. Rev. Neurosci. 7, 523–534 (2006).
3. James, W. The principles of psychology. (Henry Holt and Company, 1890).
4. Pearson, J. & Westbrook, F. Phantom perception: voluntary and involuntary nonretinal vision. Trends Cogn. Sci. 1–7, https://doi.org/10.1016/j.tics.2015.03.004 (2015).
5. Pearson, J., Naselaris, T., Holmes, E. A. & Kosslyn, S. M. Mental Imagery: Functional Mechanisms and Clinical Applications. Trends Cogn. Sci. 19, 590–602 (2015).
6. Pearson, J., Rademaker, R. L. & Tong, F. Evaluating the Mind's Eye: The Metacognition of Visual Imagery. Psychol. Sci. 22, 1535–1542 (2011).
7. Bode, S. et al. Tracing the Unconscious Generation of Free Decisions Using Ultra-High Field fMRI. PLoS One 6 (2011).
8. Haynes, J. D. et al. Reading Hidden Intentions in the Human Brain. Curr. Biol. 17, 323–328 (2007).
9. Soon, C. S., Brass, M., Heinze, H.-J. & Haynes, J.-D. Unconscious determinants of free decisions in the human brain. Nat. Neurosci. 11, 543–545 (2008).
10. Soon, C. S., He, A. H., Bode, S. & Haynes, J.-D. Predicting free choices for abstract intentions. Proc. Natl. Acad. Sci. 110, 6217–6222 (2013).
11. Dehaene, S. et al. Cerebral mechanisms of word masking and unconscious repetition priming. Nat. Neurosci. 4, 752–8 (2001).
12. Kosslyn, S. M. et al. Visual Mental Imagery Activates Topographically Organized Visual Cortex: PET Investigations. J. Cogn. Neurosci. 5, 263–287 (1993).
13. Naselaris, T., Olman, C. A., Stansbury, D. E., Ugurbil, K. & Gallant, J. L. A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes. Neuroimage, https://doi.org/10.1016/j.neuroimage.2014.10.018 (2015).
14. Haxby, J. V. et al. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425–30 (2001).
15. Kamitani, Y. & Tong, F. Decoding the visual and subjective contents of the human brain. Nat. Neurosci. 8, 679–685 (2005).
16. Kriegeskorte, N., Goebel, R. & Bandettini, P. Information-based functional brain mapping. Proc. Natl. Acad. Sci. USA 103, 3863–8 (2006).
17. Danquah, A. N. et al. Biases in the subjective timing of perceptual events: Libet et al. (1983) revisited. Conscious. Cogn. 17, 616–627 (2008).
18. Pearson, J., Cliord, C. W. G. & Tong, F. e functional impact of mental imagery on conscious perception. Curr. Biol. 18, 982–6
(2008).
19. Soon, C. S., Allefeld, C., Bogler, C., Heinzle, J. & Haynes, J. D. Predictive brain signals best predict upcoming and not previous
choices. Front. Psychol. 5, 1–3 (2014).
20. Lages, M., Boyle, S. C. & Jaworsa, . Flipping a coin in your head without monitoring outcomes? Comments on predicting free
choices and a demo program. Front. Psychol. 4, 535–540 (2013).
21. Allefeld, C., Soon, C. & Bogler, C. Sequential dependencies between trials in free choice tass. arXiv Prepr. arXiv 1–15 at, http://
arxiv.org/abs/1311.0753 (2013).
22. Harrison, S. A. & Tong, F. Decoding reveals the contents of visual woring memory in early visual areas. Nature 458, 632–635
(2009).
23. eddy, L., Tsuchiya, N. & Serre, T. eading the minds eye: Decoding category information during mental imagery. Neuroimage 50,
818–825 (2010).
2 4. Lee, S.-H., ravitz, D. J. & Baer, C. I. Disentangling visual imagery and perception of real-world objects. Neuroimage 59, 4064–4073
(2012).
25. Stelzer, J., Chen, Y. & Turner, . Statistical inference and multiple testing correction in classication-based multi-voxel pattern
analysis (MVPA): andom permutations and cluster size control. Neuroimage 65, 69–82 (2013).
26 . Albers, A. M. et al. Shared representations for woring memory and mental imagery in early visual cortex. Curr. Biol. 23, 1427–1431
(2013).
27. Vohs, . D. et al. Maing choices impairs subsequent self-control: a limited-resource account of decision maing, self-regulation,
and active initiative. J. Pers. Soc. Psychol. 94, 883–898 (2008).
28. Pearson, J. New Directions in Mental-Imagery esearch: e Binocular-ivalry Technique and Decoding fMI Patterns. Cur r. Di r.
Psychol. Sci. 23, 178–183 (2014).
29. Jamalabadi, H., Alizadeh, S., Schönauer, M., Leibold, C. & Gais, S. Classication based hypothesis testing in neuroscience: Below-
chance level classification rates and overlooed statistical properties of linear parametric classifiers. Hum. Brain Mapp. 37,
1842–1855 (2016).
30. Bannert, M. M. & Bartels, A. Decoding the yellow of a gray banana. Curr. Biol. 23, 2268–2272 (2013).
31 . Cichy, . M., Heinzle, J. & Haynes, J. D. Imagery and perception share cortical representations of content and location. Cereb. Cortex
22, 372–380 (2012).
32. Libet, B., Gleason, Ca, Wright, E. W. & Pearl, D. . Time of Conscious Intention To Act in elation To Onset of Cerebral Activity
(eadiness-Potential). Brain 106, 623–642 (1983).
33. Libet, B. Unconscious cerebral initiative and the role of conscious will in voluntary action. Behav. Brain Sci. 8, 529–539 (1985).
34. de Gee, J. W. et al. Dynamic modulation of decision biases by brainstem arousal systems. Elife 6, 1–36 (2017).
35. Görgen, ., Hebart, M. N., Allefeld, C. & Haynes, J.-D. e same analysis approach: Practical protection against the pitfalls of novel
neuroimaging analysis methods. Neuroimage 1–12, 10.1016/j.neuroimage.2017.12.083 (2017).
36. Lages, M. & Jaworsa, . How predictable are ‘spontaneous decisions’ and ‘hidden intentions’? Comparing classication results
based on previous responses with multivariate pattern analysis of fMI BOLD signals. Front. Psychol. 3, 1–8 (2012).
37. S churger, A., Sitt, J. D. & Dehaene, S. An accumulator model for spontaneous neural activity prior to self-initiated movement. Proc.
Natl. Acad. Sci. 109, E2904–E2913 (2012).
38. S chultze-ra, M. et al. e point of no return in vetoing self-initiated movements. Proc. Natl. Acad. Sci. 113, 1080–1085 (2016).
39. Stoerig, P. & Cowey, A. Blindsight in man and money. Brain 120(Pt 3), 535–59 (1997).
40. Dehaene, S. et al. Imaging unconscious semantic priming. Nature 395, 597–600 (1998).
41. Dell’Acqua, . & Grainger, J. Unconscious semantic priming from pictures. Cognition 73, B1–B15 (1999).
4 2. Vlassova, A., Donin, C. & Pearson, J. Unconscious information changes decision accuracy but not condence. Proc. Natl. Acad. Sci.
111, 16214–16218 (2014).
43. Purdon, C. & Clar, D. A. Obsessive intrusive thoughts in nonclinical subjects. Part I. Content and relation with depressive, anxious
and obsessional symptoms. Behav. Res. er. 31, 713–720 (1993).
44. Brewin, C. ., Gregory, J. D., Lipton, M. & Burgess, N. Intrusive images in psychological disorders: Characteristics, neural
mechanisms, and treatment implications. Psychol. Rev. 117, 210–232 (2010).
45. Dijstra, N., Bosch, S. E. & van Gerven, M. A. J. J. Vividness of Visual Imagery Depends on the Neural Overlap with Perception in
Visual Areas. J. Neurosci. 37, 1367–1373 (2017).
46. Erdfelder, E., FAul, F., Buchner, A. & Lang, A. G. Statistical power analyses using G*Power 3.1: Tests for correlation and regression
analyses. Behav. Res. Methods 41, 1149–1160 (2009).
47. Chang, S., Lewis, D. E. & Pearson, J. e functional eects of color perception and color imagery. J. Vis. 13(10), 1–10 (2013).
48. eogh, . & Pearson, J. Mental Imagery and Visual Woring Memory. PLoS One 6, e29221 (2011).
49. eogh, . & Pearson, J. e sensory strength of voluntary visual imagery predicts visual woring memory capacity. J. Vis. 14, 7–7
(2014).
50. Sereno, M. I. et al. Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science 268,
889–93 (1995).
51. Warning, J. et al. fMI retinotopic mapping–step by step. Neuroimage 17, 1665–83 (2002).
52. Brainard, D. H. e Psychophysics Toolbox. Spat. Vis. 10, 433–6 (1997).
53. Pelli, D. G. e VideoToolbox soware for visual psychophysics: transforming numbers into movies. Spat. Vis. 10, 437–42 (1997).
54. Dale, A. M., Fischl, B. & Sereno, M. I. Cortical surface-based analysis. I. Segmentation and surface reconstruction. Neuroimage 9,
179–94 (1999).
5 5. Fischl, B., Sereno, M. I. & Dale, A. M. Cortical surface-based analysis. II: Ination, attening, and a surface-based coordinate system.
Neuroimage 9, 195–207 (1999).
56. Hebart, M. N., Görgen, . & Haynes, J.-D. e Decoding Toolbox (TDT): a versatile soware pacage for multivariate analyses of
functional imaging data. Front. Neuroinform. 8, 88 (2014).
57. Mil, M. et al. Eects of spatial smoothing on fMI group inferences. Magn. Reson. Imaging 26, 490–503 (2008).
58. Friston, . J., Worsley, . J., Fracowia, . S. J., Mazziotta, J. C. & Evans, A. C. Assessing the signicance of focal activations using
their spatial extent. Hum. Brain Mapp. 1, 210–220 (1994).
59. Worsley, K. J. et al. A unified statistical approach for determining significant signals in images of cerebral activation. Hum. Brain Mapp. 4, 58–73 (1996).
60. Smith, S. M. et al. Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23, S208–S219 (2004).
61. Woo, C.-W., Krishnan, A. & Wager, T. D. Cluster-extent based thresholding in fMRI analyses: Pitfalls and recommendations. Neuroimage 91, 412–419 (2014).
62. Eklund, A., Nichols, T. E. & Knutsson, H. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proc. Natl. Acad. Sci. 113, 7900–7905 (2016).
63. Good, P. Permutation, Parametric and Bootstrap Tests of Hypotheses., https://doi.org/10.1007/b138696 (Springer-Verlag, 2005).
Acknowledgements
We would like to thank Johanna Bergmann for her input on the experimental design, useful comments and help
with participant testing; Eugene Kwok for his help with the behavioral testing; and Colin Clifford, Damien Mannion and
Kiley Seymour for useful comments. This research was supported by Australian NHMRC grants GNT1046198
and GNT1085404. JP was supported by a Career Development Fellowship GNT1049596 and ARC Discovery
Projects DP140101560 and DP160103299.
Author Contributions
All authors developed the study concept and design. Testing, data collection, and data analysis were performed
by R.K. Data interpretation was done by all authors. All authors wrote and approved the final version of the
manuscript for submission.
Additional Information
Supplementary information accompanies this paper at https://doi.org/10.1038/s41598-019-39813-y.
Competing Interests: e authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International
License, which permits use, sharing, adaptation, distribution and reproduction in any medium or
format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons license, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons license and your intended use is not permitted
by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the
copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
© The Author(s) 2019
Content courtesy of Springer Nature, terms of use apply. Rights reserved
1.
2.
3.
4.
5.
6.
Terms and Conditions
Springer Nature journal content, brought to you courtesy of Springer Nature Customer Service Center GmbH (“Springer Nature”).
Springer Nature supports a reasonable amount of sharing of research papers by authors, subscribers and authorised users (“Users”), for small-
scale personal, non-commercial use provided that all copyright, trade and service marks and other proprietary notices are maintained. By
accessing, sharing, receiving or otherwise using the Springer Nature journal content you agree to these terms of use (“Terms”). For these
purposes, Springer Nature considers academic use (by researchers and students) to be non-commercial.
These Terms are supplementary and will apply in addition to any applicable website terms and conditions, a relevant site licence or a personal
subscription. These Terms will prevail over any conflict or ambiguity with regards to the relevant terms, a site licence or a personal subscription
(to the extent of the conflict or ambiguity only). For Creative Commons-licensed articles, the terms of the Creative Commons license used will
apply.
We collect and use personal data to provide access to the Springer Nature journal content. We may also use these personal data internally within
ResearchGate and Springer Nature and as agreed share it, in an anonymised way, for purposes of tracking, analysis and reporting. We will not
otherwise disclose your personal data outside the ResearchGate or the Springer Nature group of companies unless we have your permission as
detailed in the Privacy Policy.
While Users may use the Springer Nature journal content for small scale, personal non-commercial use, it is important to note that Users may
not:
use such content for the purpose of providing other users with access on a regular or large scale basis or as a means to circumvent access
control;
use such content where to do so would be considered a criminal or statutory offence in any jurisdiction, or gives rise to civil liability, or is
otherwise unlawful;
falsely or misleadingly imply or suggest endorsement, approval , sponsorship, or association unless explicitly agreed to by Springer Nature in
writing;
use bots or other automated methods to access the content or redirect messages
override any security feature or exclusionary protocol; or
share the content in order to create substitute for Springer Nature products or services or a systematic database of Springer Nature journal
content.
In line with the restriction against commercial use, Springer Nature does not permit the creation of a product or service that creates revenue,
royalties, rent or income from our content or its inclusion as part of a paid for service or for other commercial gain. Springer Nature journal
content cannot be used for inter-library loans and librarians may not upload Springer Nature journal content on a large scale into their, or any
other, institutional repository.
These terms of use are reviewed regularly and may be amended at any time. Springer Nature is not obligated to publish any information or
content on this website and may remove it or features or functionality at our sole discretion, at any time with or without notice. Springer Nature
may revoke this licence to you at any time and remove access to any copies of the Springer Nature journal content which have been saved.
To the fullest extent permitted by law, Springer Nature makes no warranties, representations or guarantees to Users, either express or implied
with respect to the Springer nature journal content and all parties disclaim and waive any implied warranties or warranties imposed by law,
including merchantability or fitness for any particular purpose.
Please note that these rights do not automatically extend to content, data or other material published by Springer Nature that may be licensed
from third parties.
If you would like to use or distribute our Springer Nature journal content to a wider audience or on a regular basis or in any other manner not
expressly permitted by these Terms, please contact Springer Nature at
onlineservice@springernature.com

Supplementary resource (1)

... So far, neuroimaging investigations of the EVC have focused on visual rather than motor imagery, and the results have been controversial, with some studies showing BOLD responses above baseline in the EVC (Chen et al., 1998;Craven and Kanwisher, 2000;Dijkstra et al., 2017;Ganis et al., 2004;Ishai et al., 2002;Klein et al., 2000;Lambert et al., 2002;Le Bihan et al., 1993;Sabbah et al., 1995), while others did not (D'Esposito et al., 1997;Formisano et al., 2002;Knauff et al., 2000;Trojano et al., 2000;Wheeler and Petersen, 2000). Regardless of the involvement of the EVC in perceptual imagery, visual imagery content can be decoded from the EVC even when activation is at baseline (Albers et al., 2013;Dijkstra et al., 2017;Koenig-Robert and Pearson, 2019;Naselaris et al., 2015), and most intriguingly, there are common patterns of activity that are shared between perception and visual imagery (Albers et al., 2013;Naselaris et al., 2015;. ...
Chapter
Recent evidence shows that the role of the early visual cortex (EVC) goes beyond visual processing and into higher cognitive functions (Roelfsema and de Lange in Annu. Rev. Vis. Sci. 2:131–151, 2016). Further, neuroimaging results indicate that action intention can be predicted based on the activity pattern in the EVC (Gallivan et al. in Cereb. Cortex 29:4662–4678, 2019; Gutteling et al. in J. Neurosci. 35:6472–6480, 2015). Could it just be imagery? Further, can we decode action intention in the EVC based on activity patterns elicited by motor imagery, and vice versa? To answer this question, we explored whether areas implicated in hand actions and imagery tasks have a shared representation for planning and imagining hand movements. We used a slow event-related functional magnetic resonance imaging (fMRI) paradigm to measure the BOLD signal while participants (\(N=16\)) performed or imagined performing actions with the right dominant hand towards an object, which consisted of a small shape attached on a large shape. The actions included grasping the large or small shape, and reaching to the center of the object while fixating a point above the object. At the beginning of each trial, an auditory cue instructed participants about the task (Imagery, Movement) and the action (Grasp large, Grasp small, Reach) to be performed at the end of the trial. After a 10-s delay, which included a planning phase in Movement trials, a go cue prompted the participants to perform or imagine performing the action (Go phase). We used standard retinotopic mapping procedures to localize the retinotopic location of the object in the EVC. Using multi-voxel pattern analysis, we decoded action type based on activity patterns elicited during the planning phase of real actions (Movement task) as well as in the Go phase of the Imagery task in the anterior intraparietal sulcus (aIPS) and in the EVC. 
In addition, we decoded imagined actions based on the activity pattern of planned actions (and vice-versa) in aIPS, but not in EVC. Our results suggest a shared representation for planning and imagining specific hand movements in aIPS but not in low-level visual areas. Therefore, planning and imagining actions have overlapping but not identical neural substrates.
... It was also noted that out of 10 million bits of information received by the brain per second, only 50 bits are processed in our conscious mind [35]. Finally, it was observed that the human brain decides unconsciously up to 11 seconds before people are aware of it [36]. This is why this paper wants to examine whether it is time for an evolution of CSR to CSR 2.0, which would be based on neuroscience, or more precisely, whether it is time for neuroCSR, which would communicate CSR messages in a more brain-friendly way. ...
Article
Full-text available
The majority of studies evaluating the effectiveness of branded CSR campaigns are concentrated and base their conclusions on data collection through self-reporting questionnaires. Although such studies provide insights for evaluating the effectiveness of CSR communication methods, analysing the message that is communicated, the communication channel used and the explicit brain responses of those for whom the message is intended, they lack the ability to fully encapsulate the problem of communicating environmental messages by not taking into consideration what the recipients’ implicit brain reactions are presenting. Therefore, this study aims to investigate the effectiveness of CSR video communications relating to environmental issues through the lens of the recipients’ implicit self, by employing neuroscience-based assessments. For the examination of implicit brain perception, an electroencephalogram (EEG) was used, and the collected data was analysed through three indicators identified as the most influential indicators on human behaviour. These three indicators are emotional valence, the level of brain engagement and cognitive load. The study is conducted on individuals from the millennial generation in Thessaloniki, Greece, whose implicit brain responses to seven branded commercial videos are recorded. The seven videos were a part of CSR campaigns addressing environmental issues. Simultaneously, the self-reporting results from the participants were gathered for a comparison between the explicit and implicit brain responses. One of the key findings of the study is that the explicit and implicit brain responses differ to the extent that the CSR video communications’ brain friendliness has to be taken into account in the future, to ensure success. The results of the study provide an insight for the future creation process, conceptualisation, design and content of the effective CSR communication, in regard to environmental issues.
... Engaging in imagery is associated with activation in the relevant sensory cortex (e.g., visual imagery tied with visual cortex) and this has been shown for visual (Cattaneo et al., 2011;Cui et al., 2007;Daselaar et al., 2010;Dijkstra et al., 2017;Ganis et al., 2004;Sparing et al., 2002), auditory (Daselaar et al., 2010;Zvyagintsev et al., 2013), olfactory (Djordjevic et al., 2005;Leclerc et al., 2019;Plailly et al., 2012), gustatory (Belardinelli et al., 2009;Kobayashi et al., 2004Kobayashi et al., , 2011, tactile (Schmidt et al., 2014;Yoo et al., 2003), and motor imagery (Grèzes & Decety, 2000;Hanakawa et al., 2008;Hétu et al., 2013). Here, some studies showed a positive correlation between sensory cortex activation and imagery vividness (Belardinelli et al., 2009;Cui et al., 2007;Herholz et al., 2012), while others showed that the content of visual imagery can be decoded from visual cortex activity using multivariate decoding in fMRI (Albers et al., 2013;Koenig-Robert & Pearson, 2019;Naselaris et al., 2015). Similarly, low imagers rely less on visual cortex compared to high imagers when asked to complete visual imagery tasks (mental rotation; Logie et al., 2011), and studies using TMS show that engaging in visual imagery is associated with increased excitation in visual cortex (Cattaneo et al., 2011;Sparing et al., 2002). ...
Article
Full-text available
People with aphantasia have impoverished visual imagery so struggle to form mental pictures in the mind's eye. By testing people with and without aphantasia, we investigate the relationship between sensory imagery and sensory sensitivity (i.e., hyper- or hypo-reactivity to incoming signals through the sense organs). In Experiment 1 we first show that people with aphantasia report impaired imagery across multiple domains (e.g., olfactory, gustatory etc.) rather than simply vision. Importantly, we also show that imagery is related to sensory sensitivity: aphantasics reported not only lower imagery, but also lower sensory sensitivity. In Experiment 2, we showed a similar relationship between imagery and sensitivity in the general population. Finally, in Experiment 3 we found behavioural corroboration in a Pattern Glare Task, in which aphantasics experienced less visual discomfort and fewer visual distortions typically associated with sensory sensitivity. Our results suggest for the very first time that sensory imagery and sensory sensitivity are related, and that aphantasics are characterised by both lower imagery, and lower sensitivity. Our results also suggest that aphantasia (absence of visual imagery) may be more accurately defined as a subtype of a broader imagery deficit we name dysikonesia, in which weak or absent imagery occurs across multiple senses.
Article
Accumulating multivariate pattern analysis (MVPA) results from fMRI studies suggest that information is represented in fingerprint patterns of activations and deactivations during perception, emotions, and cognition. We postulate that these fingerprint patterns might reflect neuronal-population level sparse code documented in two-photon calcium imaging studies in animal models, i.e., information represented in specific and reproducible ensembles of a few percent of active neurons amidst widespread inhibition in neural populations. We suggest that such representations constitute a fundamental organizational principle via interacting across multiple levels of brain hierarchy, thus giving rise to perception, emotions, and cognition.
Article
The present study investigated whether different types of motor imageries can be classified based on the location of the activation peaks or the multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) and compared the difference between visual motor imagery (VI) and kinesthetic motor imagery (KI). During fMRI scanning sessions, 25 participants imagined four movements included in the Motor Imagery Questionnaire-Revised (MIQ-R): knee lift, jump, arm movement, and waist bend. These four imagined movements were then classified based on the peak location or the patterns of fMRI signal values. We divided the participants into two groups based on whether they found it easier to generate VI (VI group, n = 10) or KI (KI group, n = 15). Our results show that the imagined movements can be classified using both the location of the activation peak and the spatial activation patterns within the sensorimotor cortex, and MVPA performs better than the activation peak classification. Furthermore, our result reveals that the KI group achieved a higher MVPA decoding accuracy within the left primary somatosensory cortex than the VI group, suggesting that the modality of motor imagery differently affects the classification performance in distinct brain regions.
Article
Top-down spatial attention enhances cortical representations of behaviorally relevant visual information and increases the precision of perceptual reports. However, little is known about the relative precision of top-down attentional modulations in different visual areas, especially compared to the highly precise stimulus-driven responses observed in early visual cortex. For example, the precision of attentional modulations in early visual areas may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas in prefrontal cortex that generate and relay the top-down signals. Here, we used fMRI in human participants to assess the precision of bottom-up spatial representations evoked by high-contrast stimuli across the visual hierarchy. Then, we examined the relative precision of top-down attentional modulations in the absence of spatially specific bottom-up drive. While V1 showed the largest relative difference between the precision of top-down attentional modulations and the precision of bottom-up modulations, mid-level areas such as V4 showed relatively smaller differences between the precision of top-down and bottom-up modulations. Overall, this interaction between visual areas (e.g. V1 vs V4) and the relative precision of top-down and bottom-up modulations suggests that the precision of top-down attentional modulations is limited by the representational fidelity of the areas that generate and relay top-down feedback signals.
Chapter
In this chapter, I focus on defining freedom of thought and conscience (Section “New Threats to Freedom of Thought and Conscience”) and the new neuroscientific technologies and devices apt to read our mind/brain (Section “Neuroscience Crossing the Final Frontier”). In Section “New Technologies that Influence Cognitive Processes and Mental Contents”, I explain how digital technologies and devices can influence cognitive processes and mental contents. When exposed to similar stimuli, the “common brain” of human beings may end up becoming very similar across individuals. In this sense, digital technology might not be as neutral as it looks. In Section “The Need for and Right to Cognitive Freedom”, I provide a definition of mental integrity and argue why there is a need for a right to cognitive freedom. In Section “Using Technology as a Defence Against Technology Itself”, I maintain that we should defend mental integrity by incorporating functional limitations into devices capable of interfering with it.
Article
Is consciousness—the subjective awareness of the sensations, perceptions, beliefs, desires, and intentions of mental life—a genuine cause of human action, or a mere impotent epiphenomenon that accompanies the brain’s physical activity but is utterly incapable of making anything actually happen? This article reviews the history and current status of experiments and commentary related to Libet’s influential paper (Brain 106:623–664, 1983), whose conclusion “that cerebral initiation even of a spontaneous voluntary act … can and usually does begin unconsciously” has had a huge effect on debate about the efficacy of conscious intentions. Early (up to 2008) and more recent (2008 onward) experiments replicating and criticizing Libet’s conclusions, and especially his methods, will be discussed, focusing in particular on recent observations that the readiness potential (RP) may only be an “artifact of averaging” and that, when intention is measured using “tone probes,” the onset of intention is found much earlier, often before the onset of the RP. Based on these findings, Libet’s methodology was flawed and his results are no longer valid reasons for rejecting Fodor’s “good old commonsense belief/desire psychology”, on which “my wanting is causally responsible for my reaching.”
Article
Full-text available
Relationship-based approaches to leadership represent one of the fastest-growing areas of leadership research and help us better understand organizational leadership. These approaches emphasize the relationship and interaction between the leader and the follower, and the way the two interact and influence each other in attaining mutual goals. It is known that leaders are linked to followers, and vice versa, in the sense of responding to each other's needs in pursuit of mutual goals. Leaders and followers are essential parts of this social process, implying that they lose their traditional identities rooted in the formal organizational structure (manager-subordinate) and become inseparable actors in a co-constructed process of leadership. What is less well known is the way that leadership actors are linked to each other, and in particular how they try to work out how to do this in the workplace. What is even less well understood is the importance and role of consciousness in this relationship, especially since consciousness appears to be both a fundamental and a very elusive element of human relations. This paper therefore conceptually explores the concept of consciousness within the context of social brain theory to argue that leadership actors need to rethink their approach to individuality and focus on mutually dependent relations with each other. This paper contributes to the field of Neuro-management by introducing the concept of Homo Relationalis. In this respect, we suggest that leadership is not just a socially constructed element but also a social-brain-constructed phenomenon that requires an understanding of the human brain as a social organ. We further recommend a new approach of applying cognitive style analysis to capture the duality of leader/follower in the same person, following self-illusion theory. Finally, we conclude that we need to further emphasize a social-brain-adjusted relational leadership approach, and we introduce two new cognitive styles that can help capture its essence.
Article
Full-text available
Decision-makers often arrive at different choices when faced with repeated presentations of the same evidence. Variability of behavior is commonly attributed to noise in the brain's decision-making machinery. We hypothesized that phasic responses of brainstem arousal systems are a significant source of this variability. We tracked pupil responses (a proxy of phasic arousal) during sensory-motor decisions in humans, across different sensory modalities and task protocols. Large pupil responses generally predicted a reduction in decision bias. Using fMRI, we showed that the pupil-linked bias reduction was (i) accompanied by a modulation of choice-encoding pattern signals in parietal and prefrontal cortex and (ii) predicted by phasic, pupil-linked responses of a number of neuromodulatory brainstem centers involved in the control of cortical arousal state, including the noradrenergic locus coeruleus. We conclude that phasic arousal suppresses decision bias on a trial-by-trial basis, thus accounting for a significant component of the variability of choice behavior.
Article
Full-text available
Standard neuroimaging data analysis based on traditional principles of experimental design, modelling, and statistical inference is increasingly complemented by novel analysis methods, driven, e.g., by machine learning. While these novel approaches provide new insights into neuroimaging data, they often have unexpected properties, generating a growing literature on possible pitfalls. We propose to meet this challenge by adopting a habit of systematically testing experimental design, analysis procedures, and statistical inference. Specifically, we suggest applying the analysis method used for the experimental data also to aspects of the experimental design, simulated confounds, simulated null data, and control data. We stress the importance of keeping the analysis method the same in the main and test analyses, because only then can possible confounds and unexpected properties be reliably detected and avoided. We describe and discuss this Same Analysis Approach in detail and demonstrate it in two worked examples using multivariate decoding. With these examples, we reveal two previously unknown sources of error: a mismatch between counterbalancing and cross-validation, which leads to systematic below-chance accuracies, and the linear decoding of a nonlinear effect, namely a difference in variance.
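A minimal sketch of the Same Analysis Approach, assuming a nearest-centroid decoder with leave-one-out cross-validation (not the authors' code): the identical pipeline is fed simulated null data, and the check exposes exactly the kind of unexpected property described in the abstract, a systematic below-chance bias.

```python
import random

random.seed(1)

def loocv_accuracy(trials):
    """Leave-one-out cross-validated nearest-centroid decoding accuracy."""
    def centroid(ps):
        return [sum(c) / len(c) for c in zip(*ps)]
    def d2(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y))
    correct = 0
    for i, (x, label) in enumerate(trials):
        train = [t for j, t in enumerate(trials) if j != i]
        cents = {l: centroid([p for p, q in train if q == l]) for l in "AB"}
        correct += min(cents, key=lambda l: d2(x, cents[l])) == label
    return correct / len(trials)

# Same Analysis Approach: feed the *identical* pipeline simulated null data
# (pure noise, so the labels carry no information) and inspect the result.
null_accs = [
    loocv_accuracy([([random.gauss(0, 1) for _ in range(10)], lab)
                    for lab in "AAAAABBBBB"])
    for _ in range(200)
]
mean_null = sum(null_accs) / len(null_accs)

# After leaving a trial out, its own class has fewer training trials than
# the other class, so on null data this pipeline tends to score *below*
# the nominal 50% chance level.
print(f"mean accuracy on null data: {mean_null:.3f}")
```

Running the same decoder on such null data is a cheap sanity check: any systematic deviation from 50% here flags a property of the pipeline itself rather than of the experimental data.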
Article
Full-text available
The most widely used task functional magnetic resonance imaging (fMRI) analyses use parametric statistical methods that depend on a variety of assumptions. In this work, we use real resting-state data and a total of 3 million random task group analyses to compute empirical familywise error rates for the fMRI software packages SPM, FSL, and AFNI, as well as a nonparametric permutation method. For a nominal familywise error rate of 5%, the parametric statistical methods are shown to be conservative for voxelwise inference and invalid for clusterwise inference. Our results suggest that the principal cause of the invalid cluster inferences is spatial autocorrelation functions that do not follow the assumed Gaussian shape. By comparison, the nonparametric permutation test is found to produce nominal results for voxelwise as well as clusterwise inference. These findings speak to the need to validate the statistical methods used in the field of neuroimaging.
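The nonparametric alternative can be illustrated with a one-sample permutation test that flips the signs of subject-level contrast values. All numbers below are made up for illustration; a whole-brain analysis would additionally take the maximum statistic across voxels within each permutation to control the familywise error rate.

```python
import random

random.seed(2)

# Group-level contrast values for ten subjects at one "voxel".
# Under H0 the values are symmetric about zero, so each sign flip
# is an equally likely relabeling of the data.
data = [0.8, 1.1, -0.2, 0.9, 0.6, 1.3, 0.4, -0.1, 0.7, 1.0]
observed = sum(data) / len(data)

n_perm = 5000
count = 0
for _ in range(n_perm):
    flipped = [x * random.choice((-1, 1)) for x in data]
    if sum(flipped) / len(flipped) >= observed:
        count += 1

# Permutation p-value, counting the observed labeling itself.
p_value = (count + 1) / (n_perm + 1)
print(f"permutation p = {p_value:.4f}")
```

No Gaussian assumption enters anywhere: the null distribution is built from the data themselves, which is why the permutation approach stays nominal where the parametric cluster inference fails.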
Article
In man and monkey, absolute cortical blindness is caused by destruction of the optic radiations and/or the primary visual cortex. It is characterized by an absence of any conscious vision, but stimuli presented inside its borders may nevertheless be processed. This unconscious vision includes neuroendocrine, reflexive, indirect, and forced-choice responses, which are mediated by the visual subsystems that escape both the direct cerebral damage and the ensuing degeneration. While extrastriate cortical areas participate in the mediation of the forced-choice responses, a concomitant striate cortical activation does not seem to be necessary for blindsight. Whether the loss of phenomenal vision is a necessary consequence of striate cortical destruction and whether this structure is indispensable for conscious sight are much-debated questions which need to be tackled experimentally.
Article
Significance statement: Visual imagery is the ability to visualise objects that are not in our direct line of sight; something that is important for memory, spatial reasoning and many other tasks. It is known that the better people are at visual imagery, the better they can perform these tasks. However, the neural correlates of moment-to-moment variation in visual imagery remain unclear. In this study we show that the more the neural response during imagery is similar to the neural response during perception, the more vivid or perception-like the imagery experience is.
Article
Multivariate pattern analysis (MVPA) has recently become a popular tool for data analysis. Often, classification accuracy as quantified by correct classification rate (CCR) is used to illustrate the size of the effect under investigation. However, we show that in low sample size (LSS), low effect size (LES) data, which is typical in neuroscience, the distribution of CCRs from cross-validation of linear MVPA is asymmetric and can show classification rates considerably below what would be expected from chance classification. Conversely, the mode of the distribution in these cases is above expected chance levels, leading to a spuriously high number of above chance CCRs. This unexpected distribution has strong implications when using MVPA for hypothesis testing. Our analyses warrant the conclusion that CCRs do not well reflect the size of the effect under investigation. Moreover, the skewness of the null-distribution precludes the use of many standard parametric tests to assess significance of CCRs. We propose that MVPA results should be reported in terms of p-values, which are estimated using randomization tests. Also, our results show that cross-validation procedures using a low number of folds, e.g. 2-fold, are generally more sensitive, even though the average CCRs are often considerably lower than those obtained using a higher number of folds.
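The recommendation above, reporting randomization-test p-values rather than raw CCRs, might look like the following sketch (toy data and a nearest-centroid decoder, purely illustrative): the cross-validated accuracy is recomputed under many random relabelings of the trials, and the p-value is the fraction of relabelings that do at least as well as the observed labeling.

```python
import random

random.seed(3)

def loocv_accuracy(patterns, labels):
    """Leave-one-out cross-validated nearest-centroid accuracy (CCR)."""
    def centroid(ps):
        return [sum(c) / len(c) for c in zip(*ps)]
    def d2(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y))
    correct = 0
    for i, (x, lab) in enumerate(zip(patterns, labels)):
        tr = [(p, l) for j, (p, l) in enumerate(zip(patterns, labels)) if j != i]
        cents = {l: centroid([p for p, q in tr if q == l]) for l in "AB"}
        correct += min(cents, key=lambda l: d2(x, cents[l])) == lab
    return correct / len(patterns)

# Toy dataset: 8 trials, two classes separated along every "voxel".
labels = list("AAAABBBB")
patterns = [[(1 if lab == "A" else -1) + random.gauss(0, 0.3)
             for _ in range(12)]
            for lab in labels]

observed = loocv_accuracy(patterns, labels)

# Randomization test: rerun the *whole* cross-validation under shuffled
# labels; the p-value is the fraction of shuffles doing at least as well.
n_perm = 1000
count = 0
for _ in range(n_perm):
    shuffled = labels[:]
    random.shuffle(shuffled)
    if loocv_accuracy(patterns, shuffled) >= observed:
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(f"observed CCR = {observed:.2f}, randomization p = {p_value:.3f}")
```

Crucially, the entire cross-validation loop sits inside the permutation loop, so the null distribution inherits whatever skew the CCR estimator has in small samples, which is exactly why the p-value is trustworthy where the raw accuracy is not.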