Article

Neural Decoding of Visual Imagery During Sleep.

ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan.
Science (Impact Factor: 31.48). 04/2013; 340(6132). DOI: 10.1126/science.1234330
Source: PubMed

ABSTRACT
Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here, we present a neural decoding approach in which machine learning models predict the contents of visual imagery during the sleep onset period given measured brain activity, by discovering links between human fMRI patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
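The core idea of the abstract can be sketched in a few lines: train a classifier on multi-voxel fMRI patterns recorded while subjects perceive stimuli of known categories, then apply it to activity measured at sleep onset. The sketch below is illustrative only, with synthetic data throughout; the voxel count, category names, and nearest-centroid classifier are assumptions (the study itself used linear support vector machines on real visual-cortex voxels).

```python
# Minimal sketch of perception-trained decoding, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50
categories = ["face", "scene", "text"]

# Simulate perception-period training data: each category evokes a
# characteristic multi-voxel pattern plus measurement noise.
prototypes = {c: rng.normal(size=n_voxels) for c in categories}

def simulate_trials(category, n=20, noise=0.8):
    return prototypes[category] + noise * rng.normal(size=(n, n_voxels))

train = {c: simulate_trials(c) for c in categories}

# Nearest-centroid decoder: the predicted content is the category whose
# mean training pattern is closest to the observed pattern.
centroids = {c: x.mean(axis=0) for c, x in train.items()}

def decode(pattern):
    return min(centroids, key=lambda c: np.linalg.norm(pattern - centroids[c]))

# A "sleep-onset" test pattern, here simulated from the 'face' distribution:
test_pattern = simulate_trials("face", n=1)[0]
print(decode(test_pattern))
```

The key move in the paper is exactly this transfer: the decoder never sees sleep data during training, so above-chance decoding at sleep onset implies that imagery reuses perception-like activity patterns.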

Available from: Masako Tamaki, Jun 21, 2014
    • "MVPC studies that targeted high-order visual areas have shown similarity between activity patterns in those visual areas as well (Stokes et al., 2009; Reddy et al., 2010; Johnson and Johnson, 2014). Results from MVPC studies investigating visual working memory (Harrison and Tong, 2009; Xing et al., 2013), and dreaming (Horikawa et al., 2013) also support the notion that patterns of activity generated during mental imagery and perception are similar in some way. The finding that patterns of activity in early visual cortex during imagery are similar to patterns of activity during perception implies—but does not directly demonstrate—that low-level visual features are represented in both imagery and perception. "
    ABSTRACT: Recent multi-voxel pattern classification (MVPC) studies have shown that in early visual cortex patterns of brain activity generated during mental imagery are similar to patterns of activity generated during perception. This finding implies that low-level visual features (e.g., space, spatial frequency, and orientation) are encoded during mental imagery. However, the specific hypothesis that low-level visual features are encoded during mental imagery is difficult to directly test using MVPC. The difficulty is especially acute when considering the representation of complex, multi-object scenes that can evoke multiple sources of variation that are distinct from low-level visual features. Therefore, we used a voxel-wise modeling and decoding approach to directly test the hypothesis that low-level visual features are encoded in activity generated during mental imagery of complex scenes. Using fMRI measurements of cortical activity evoked by viewing photographs, we constructed voxel-wise encoding models of tuning to low-level visual features. We also measured activity as subjects imagined previously memorized works of art. We then used the encoding models to determine if putative low-level visual features encoded in this activity could pick out the imagined artwork from among thousands of other randomly selected images. We show that mental images can be accurately identified in this way; moreover, mental image identification accuracy depends upon the degree of tuning to low-level visual features in the voxels selected for decoding. These results directly confirm the hypothesis that low-level visual features are encoded during mental imagery of complex scenes. Our work also points to novel forms of brain-machine interaction: we provide a proof-of-concept demonstration of an internet image search guided by mental imagery.
    NeuroImage 10/2014; 105. DOI:10.1016/j.neuroimage.2014.10.018 · 6.36 Impact Factor
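The encoding-and-identification procedure this abstract describes can be sketched as follows: fit a linear model mapping low-level image features to each voxel's response, predict a response pattern for every candidate image, and identify the imagined image as the candidate whose predicted pattern best correlates with the measured activity. Everything below is synthetic and illustrative: the feature and voxel counts, the ordinary least-squares fit, and a candidate set of 100 images standing in for the thousands used in the study.

```python
# Illustrative voxel-wise encoding model plus identification, on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_images, n_features, n_voxels = 100, 20, 100

features = rng.normal(size=(n_images, n_features))      # low-level features per image
true_weights = rng.normal(size=(n_features, n_voxels))  # unknown voxel tuning
responses = features @ true_weights + 0.3 * rng.normal(size=(n_images, n_voxels))

# Fit each voxel's feature tuning by ordinary least squares.
weights, *_ = np.linalg.lstsq(features, responses, rcond=None)
predicted = features @ weights   # predicted activity pattern for every candidate

def identify(measured):
    """Index of the candidate image whose predicted pattern
    correlates best with the measured activity pattern."""
    corrs = [np.corrcoef(measured, p)[0, 1] for p in predicted]
    return int(np.argmax(corrs))

# Activity "measured during imagery" of candidate image 17 (simulated):
measured = features[17] @ true_weights + 0.3 * rng.normal(size=n_voxels)
print(identify(measured))
```

The abstract's central claim maps onto the fit step: identification succeeds only insofar as the voxels are genuinely tuned to the low-level features, which is why identification accuracy tracks the degree of feature tuning in the selected voxels.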
    • "For instance, there are now many more tools to study the interplay between language, literacy, introspection and lateralization. The featured study opens especially useful paths for the recent boom in the decoding of neural activity, through which abstract concepts can be directly measured (Kay et al., 2008; Mitchell et al., 2008; Horikawa et al., 2013). Cortical semantic representations were recently found to be warped by attention in a holistic manner (Çukur et al., 2013). "
    Frontiers in Neuroscience 08/2014; 8:249. DOI:10.3389/fnins.2014.00249 · 3.70 Impact Factor
    • "While, in principle, distributed patterns can be revealed by multivariate statistical techniques (e.g., partial least squares; Krishnan et al. 2011), there was almost immediate interest in employing these patterns with tools from machine learning that can then read out mental state from previously unseen data (Cox and Savoy 2003; Pereira et al. 2009). A multitude of interesting literature has demonstrated that it is possible to train data-driven models that can subsequently decode information from the subject's brain images; for example, semantic meaning of words (Mitchell et al. 2008), emotional prosody pronounced by actors (Ethofer et al. 2009), or more recently visual imagery (Nishimoto et al. 2011) and even attempts to decode dreams (Horikawa et al. 2013). These developments have also started a promising avenue for clinical neuroimaging. "
    ABSTRACT: Many diseases are associated with systematic modifications in brain morphometry and function. These alterations may be subtle, in particular at early stages of the disease progress, and thus not evident by visual inspection alone. Group-level statistical comparisons have dominated neuroimaging studies for many years, providing fascinating insight into brain regions involved in various diseases. However, such group-level results do not warrant diagnostic value for individual patients. Recently, pattern recognition approaches have led to a fundamental shift in paradigm, bringing multivariate analysis and predictive results, notably for the early diagnosis of individual patients. We review the state-of-the-art fundamentals of pattern recognition including feature selection, cross-validation and classification techniques, as well as limitations including inter-individual variation in normal brain anatomy and neurocognitive reserve. We conclude with a discussion of future trends including multi-modal pattern recognition, multi-center approaches with data-sharing, and cloud computing.
    Brain Topography 03/2014; 27. DOI:10.1007/s10548-014-0360-z · 2.52 Impact Factor
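The cross-validation workflow that this review emphasizes, training only on some subjects and estimating diagnostic accuracy on held-out subjects, can be sketched briefly. The data below are synthetic (two simulated groups), and the centroid classifier and fold count are illustrative assumptions standing in for whatever model and scheme a real study would use.

```python
# Illustrative k-fold cross-validation for individual-level prediction.
import numpy as np

rng = np.random.default_rng(2)
n_per_group, n_feat = 40, 10
# Simulated morphometry features: patients differ from controls by a mean shift.
patients = rng.normal(loc=0.8, size=(n_per_group, n_feat))
controls = rng.normal(loc=0.0, size=(n_per_group, n_feat))
X = np.vstack([patients, controls])
y = np.array([1] * n_per_group + [0] * n_per_group)

def kfold_accuracy(X, y, k=5):
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train_mask = np.ones(len(y), dtype=bool)
        train_mask[f] = False
        # Centroids are computed from training subjects only: no leakage
        # of held-out data into the model.
        mu1 = X[train_mask & (y == 1)].mean(axis=0)
        mu0 = X[train_mask & (y == 0)].mean(axis=0)
        pred = (np.linalg.norm(X[f] - mu1, axis=1)
                < np.linalg.norm(X[f] - mu0, axis=1)).astype(int)
        accs.append((pred == y[f]).mean())
    return float(np.mean(accs))

acc = kfold_accuracy(X, y)
print(round(acc, 2))
```

The essential point of the review's "paradigm shift" is visible in the fold loop: accuracy is measured on individuals the model never saw, which is what gives the estimate diagnostic meaning, unlike a group-level contrast.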