Neural Decoding of Visual Imagery During Sleep

ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan.
Science (Impact Factor: 33.61). 04/2013; 340(6132). DOI: 10.1126/science.1234330
Source: PubMed


Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here, we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period from measured brain activity, by discovering links between human fMRI patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared with stimulus perception, providing a means to uncover the subjective contents of dreaming through objective neural measurement.
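In spirit, the decoding approach above amounts to training a pattern classifier on stimulus-evoked activity and applying it to new activity patterns. The sketch below illustrates only that general idea: the data are entirely synthetic, and the category labels, dimensions, and classifier choice are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch: train a linear classifier on synthetic "voxel" patterns
# evoked by two visual content categories, then score it on held-out patterns.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels = 100

# Each category gets a characteristic mean activity pattern (synthetic).
means = {"car": rng.normal(0, 1, n_voxels),
         "person": rng.normal(0, 1, n_voxels)}

def simulate_trials(label, n):
    """Noisy copies of a category's template pattern, one row per trial."""
    return means[label] + 0.5 * rng.normal(size=(n, n_voxels))

X_train = np.vstack([simulate_trials("car", 40), simulate_trials("person", 40)])
y_train = ["car"] * 40 + ["person"] * 40

clf = LinearSVC().fit(X_train, y_train)

# Held-out "imagery-like" patterns: noisier samples from the same templates.
X_test = np.vstack([simulate_trials("car", 10), simulate_trials("person", 10)])
y_test = ["car"] * 10 + ["person"] * 10
accuracy = clf.score(X_test, y_test)
print(accuracy)
```

With well-separated synthetic templates the classifier decodes the category reliably; the hard part of the real study is that training and test activity come from different cognitive states (perception vs. sleep-onset imagery).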

    • "These contradictions were reconciled by a meta-analysis that suggested V1 activation might depend on the particular visual feature and degree of spatial resolution in the mental image (Kosslyn and Thompson 2003). In line with this, several studies have managed to decode mental images from early visual cortex fMRI BOLD response (Albers et al. 2013; Horikawa et al. 2013; Schlegel et al. 2013). "
    ABSTRACT: Despite mental imagery's ubiquitous role in human perception, cognition, and behavior, one standout question remains unanswered: Why does imagery vary so much from one individual to the next? Here, we used a behavioral paradigm that measures the functional impact of a mental image on subsequent conscious perception and related these measures to the anatomy of the early visual cortex estimated by fMRI retinotopic mapping. We observed a negative relationship between primary visual cortex (V1) surface area and sensory imagery strength, but found positive relationships between V1 and imagery precision (spatial location and orientation). Hence, individuals with a smaller V1 tended to have stronger, but less precise, imagery. In addition, subjective vividness of imagery was positively related to prefrontal cortex volume, but unrelated to V1 anatomy. Our findings present the first evidence for the importance of V1 layout in shaping the strength of human imagination.
    Cerebral Cortex 08/2015; DOI:10.1093/cercor/bhv186
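The anatomy–imagery relationship this abstract reports is a between-subjects correlation. A minimal synthetic sketch of that kind of analysis (all subject counts, units, and effect sizes below are invented for illustration) could look like:

```python
# Synthetic between-subjects correlation: smaller V1 surface area paired
# with stronger imagery, as in the reported negative relationship.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subjects = 30

v1_area = rng.normal(1500, 200, n_subjects)  # mm^2, synthetic values
# Build imagery strength with a built-in negative dependence on V1 area.
imagery_strength = 4.0 - 0.002 * v1_area + rng.normal(0, 0.1, n_subjects)

r, p = pearsonr(v1_area, imagery_strength)
print(r, p)
```

Because the negative dependence is built into the synthetic data, the correlation comes out strongly negative; in the real study the direction of the relationship is the empirical finding.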
    • "MVPC studies that targeted high-order visual areas have shown similarity between activity patterns in those visual areas as well (Stokes et al., 2009; Reddy et al., 2010; Johnson and Johnson, 2014). Results from MVPC studies investigating visual working memory (Harrison and Tong, 2009; Xing et al., 2013), and dreaming (Horikawa et al., 2013) also support the notion that patterns of activity generated during mental imagery and perception are similar in some way. The finding that patterns of activity in early visual cortex during imagery are similar to patterns of activity during perception implies—but does not directly demonstrate—that low-level visual features are represented in both imagery and perception. "
    ABSTRACT: Recent multi-voxel pattern classification (MVPC) studies have shown that in early visual cortex patterns of brain activity generated during mental imagery are similar to patterns of activity generated during perception. This finding implies that low-level visual features (e.g., space, spatial frequency, and orientation) are encoded during mental imagery. However, the specific hypothesis that low-level visual features are encoded during mental imagery is difficult to directly test using MVPC. The difficulty is especially acute when considering the representation of complex, multi-object scenes that can evoke multiple sources of variation that are distinct from low-level visual features. Therefore, we used a voxel-wise modeling and decoding approach to directly test the hypothesis that low-level visual features are encoded in activity generated during mental imagery of complex scenes. Using fMRI measurements of cortical activity evoked by viewing photographs, we constructed voxel-wise encoding models of tuning to low-level visual features. We also measured activity as subjects imagined previously memorized works of art. We then used the encoding models to determine if putative low-level visual features encoded in this activity could pick out the imagined artwork from among thousands of other randomly selected images. We show that mental images can be accurately identified in this way; moreover, mental image identification accuracy depends upon the degree of tuning to low-level visual features in the voxels selected for decoding. These results directly confirm the hypothesis that low-level visual features are encoded during mental imagery of complex scenes. Our work also points to novel forms of brain-machine interaction: we provide a proof-of-concept demonstration of an internet image search guided by mental imagery.
    NeuroImage 10/2014; 105. DOI:10.1016/j.neuroimage.2014.10.018
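The identification procedure this abstract describes (predict each candidate image's activity pattern from its low-level features, then match the measured pattern against those predictions) can be sketched with synthetic data. Dimensions, the feature space, and the ridge-regression choice below are assumptions for illustration, not the study's pipeline.

```python
# Voxel-wise encoding-model identification on synthetic data:
# fit feature-to-voxel weights, then pick the candidate image whose
# predicted activity best correlates with a measured pattern.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_images, n_features, n_voxels = 200, 50, 80

features = rng.normal(size=(n_images, n_features))       # low-level features per image
true_weights = rng.normal(size=(n_features, n_voxels))   # synthetic voxel tuning
activity = features @ true_weights + 0.5 * rng.normal(size=(n_images, n_voxels))

# One linear encoding model per voxel; Ridge fits all voxels at once.
enc = Ridge(alpha=1.0).fit(features, activity)

# "Imagery" measurement: the target image's signal plus fresh noise.
target = 7
measured = features[target] @ true_weights + rng.normal(size=n_voxels)

# Identification: correlate the measured pattern with every predicted pattern.
predicted = enc.predict(features)
corr = [np.corrcoef(measured, p)[0, 1] for p in predicted]
identified = int(np.argmax(corr))
print(identified)
```

The key property the abstract highlights carries over to this toy version: identification works only to the extent that the voxels are actually tuned to the features used by the encoding model.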
    • "For instance, there are now many more tools to study the interplay between language, literacy, introspection and lateralization. The featured study opens especially useful paths for the recent boom in the decoding of neural activity, through which abstract concepts can be directly measured (Kay et al., 2008; Mitchell et al., 2008; Horikawa et al., 2013). Cortical semantic representations were recently found to be warped by attention in a holistic manner (Çukur et al., 2013). "
    Frontiers in Neuroscience 08/2014; 8(8):249. DOI:10.3389/fnins.2014.00249
