Article

Neural Decoding of Visual Imagery During Sleep

ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan.
Science (Impact Factor: 33.61). 04/2013; 340(6132). DOI: 10.1126/science.1234330
Source: PubMed

ABSTRACT

Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here, we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging (fMRI) patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
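The approach described above (train decoders on perception-evoked fMRI activity, then apply them to brain activity measured before awakenings and compare the output with verbal reports) can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration using scikit-learn; the variable names, array shapes, and the choice of a linear SVM are assumptions for illustration, not the study's actual implementation.

```python
# Minimal sketch of the train-on-perception, test-on-sleep decoding scheme.
# All data here are random placeholders; real inputs would be fMRI voxel
# patterns from visual cortical areas.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical data: rows are fMRI samples, columns are voxels.
X_perception = rng.standard_normal((200, 1000))   # stimulus-induced activity
y_category = rng.integers(0, 2, size=200)         # e.g., content category present/absent
X_sleep_onset = rng.standard_normal((50, 1000))   # activity preceding awakenings

# Train the decoder on perception data only.
decoder = make_pipeline(StandardScaler(), LinearSVC())
decoder.fit(X_perception, y_category)

# Apply it to sleep-onset activity; predictions would then be scored
# against the verbal reports collected at each awakening.
predicted_content = decoder.predict(X_sleep_onset)
print(predicted_content[:10])
```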

  • Source
    • "Using a decoding method, the visual object category was successfully classified with the ECoG signals (Majima et al., 2014). Moreover, even non-invasive signals, such as functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG), were successfully decoded to infer the presented images of arbitrary characters, the contents of dreaming (Kamitani and Tong, 2005; Miyawaki et al., 2008; Horikawa et al., 2013), and the visual object category (Martin et al., 1996; Gauthier et al., 2000; Carlson et al., 2003; Kiani et al., 2007; Kriegeskorte et al., 2008; DiCarlo et al., 2012; Van de Nieuwenhuijzen et al., 2013; Cichy et al., 2014). The decoding method reveals how visual information was encoded and how it is processed in the brain (Peelen and Downing, 2007). "
    ABSTRACT: Humans recognize body parts in categories. Previous studies have shown that responses in the fusiform body area (FBA) and extrastriate body area (EBA) are evoked by the perception of the human body, whether presented as a whole or as isolated parts. These responses occur approximately 190 ms after body images are viewed. The extent to which body-sensitive responses show specificity for different body-part categories remains largely unclear. We used a decoding method to quantify neural responses associated with the perception of different categories of body parts. Nine subjects underwent measurement of their brain activity by magnetoencephalography (MEG) while viewing 14 images of feet, hands, mouths, and objects. We decoded the categories of the presented images from the MEG signals using a support vector machine (SVM) and calculated accuracy by 10-fold cross-validation. A body-sensitive response was observed in each subject, and the MEG signals corresponding to the three body-part categories were classified from signals in the occipitotemporal cortex. Decoding accuracy for body-part categories (peaking at approximately 48%) was above chance (33.3%) and significantly higher than that for random categories. Based on their time course and location, the responses appear to be body-sensitive and to carry information about body-part category. Finally, this non-invasive method can decode the category of a visual object with high temporal and spatial resolution, which may have a significant impact on brain–machine interface research. (A schematic sketch of this SVM decoding analysis follows this entry.)
    Full-text · Article · Nov 2015 · Frontiers in Human Neuroscience
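As a rough illustration of the decoding analysis summarized in the entry above, the following sketch classifies three body-part categories from MEG sensor patterns with a linear SVM and 10-fold cross-validation. All dimensions, names, and preprocessing steps are invented placeholders, not the study's actual pipeline.

```python
# Hypothetical sketch: decode body-part category (foot/hand/mouth) from
# MEG signals with a linear SVM, scored by 10-fold cross-validation.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

n_trials, n_sensors, n_times = 300, 160, 50
meg = rng.standard_normal((n_trials, n_sensors, n_times))  # trials x sensors x time
labels = rng.integers(0, 3, size=n_trials)                 # 0=foot, 1=hand, 2=mouth

# Flatten each trial's spatiotemporal pattern into a single feature vector.
X = meg.reshape(n_trials, -1)

clf = make_pipeline(StandardScaler(), LinearSVC())
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, labels, cv=cv)

# Chance level for three balanced categories is 1/3 (33.3%).
print(f"mean decoding accuracy: {scores.mean():.3f} (chance = 0.333)")
```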
  • Source
    • "MVPC studies that targeted high-order visual areas have shown similarity between activity patterns in those visual areas as well (Stokes et al., 2009; Reddy et al., 2010; Johnson and Johnson, 2014). Results from MVPC studies investigating visual working memory (Harrison and Tong, 2009; Xing et al., 2013), and dreaming (Horikawa et al., 2013) also support the notion that patterns of activity generated during mental imagery and perception are similar in some way. The finding that patterns of activity in early visual cortex during imagery are similar to patterns of activity during perception implies—but does not directly demonstrate—that low-level visual features are represented in both imagery and perception. "
    ABSTRACT: Recent multi-voxel pattern classification (MVPC) studies have shown that in early visual cortex patterns of brain activity generated during mental imagery are similar to patterns of activity generated during perception. This finding implies that low-level visual features (e.g., space, spatial frequency, and orientation) are encoded during mental imagery. However, the specific hypothesis that low-level visual features are encoded during mental imagery is difficult to test directly using MVPC. The difficulty is especially acute when considering the representation of complex, multi-object scenes that can evoke multiple sources of variation distinct from low-level visual features. We therefore used a voxel-wise modeling and decoding approach to directly test the hypothesis that low-level visual features are encoded in activity generated during mental imagery of complex scenes. Using fMRI measurements of cortical activity evoked by viewing photographs, we constructed voxel-wise encoding models of tuning to low-level visual features. We also measured activity as subjects imagined previously memorized works of art. We then used the encoding models to determine whether putative low-level visual features encoded in this activity could pick out the imagined artwork from among thousands of other randomly selected images. We show that mental images can be accurately identified in this way; moreover, identification accuracy depends on the degree of tuning to low-level visual features in the voxels selected for decoding. These results directly confirm the hypothesis that low-level visual features are encoded during mental imagery of complex scenes. Our work also points to novel forms of brain-machine interaction: we provide a proof-of-concept demonstration of an internet image search guided by mental imagery. (A schematic sketch of this encoding-model identification procedure follows this entry.)
    Full-text · Article · Oct 2014 · NeuroImage
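The identification procedure summarized in the entry above (fit a voxel-wise encoding model of tuning to low-level features on perception data, then pick the candidate image whose predicted activity pattern best matches the measured imagery activity) can be illustrated roughly as follows. The feature space, dimensions, and use of ridge regression are stand-in assumptions; the study built its own feature-based encoding models.

```python
# Hypothetical sketch of voxel-wise encoding-model identification.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_train, n_feat, n_vox, n_candidates = 500, 128, 300, 2000

# Simulated training set: low-level features of viewed photos and the
# voxel responses they evoke (linear ground truth plus noise).
F_train = rng.standard_normal((n_train, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
R_train = F_train @ W_true + rng.standard_normal((n_train, n_vox))

# 1) Fit one regularized linear encoding model per voxel.
enc = Ridge(alpha=1.0).fit(F_train, R_train)

# 2) Predict responses for a large candidate image set containing the target.
F_candidates = rng.standard_normal((n_candidates, n_feat))
R_pred = enc.predict(F_candidates)

# 3) Simulated "imagery" activity: a noisy response to the target's features.
target = 42
r_imagery = F_candidates[target] @ W_true + 3.0 * rng.standard_normal(n_vox)

# 4) Identify: choose the candidate whose predicted pattern correlates best
#    with the measured imagery pattern.
corr = [np.corrcoef(r_imagery, R_pred[i])[0, 1] for i in range(n_candidates)]
print("identified:", int(np.argmax(corr)), "true target:", target)
```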
  • Source
    • "I am almost never the dispassionate, detached voyeur of this particular dream scenario. But, I must admit this was my dream – so I need to take responsibility for its content Many of the formal properties of dreams can be reliably recognized and measured, allowing subjective experience to be correlated with physiological or behavioral responses: for example , in sensory perception (Hong et al., 1997, 2009; Horikawa et al., 2013), visual scanning (Roffwarg et al., 1962; Herman et al., 1984; Hong et al., 1995), motor control (Dresler et al., 2012), language (Hong et al., 1996), multisensory binding (Llinas and Ribary, 1993; Hong et al., 2009), and the organization of intrinsic brain networks (Koike et al., 2011). Furthermore, considerable experience has been accrued using sleep lab reports and homebased data (Hobson, 1988, 1999, 2002; Hobson and Stickgold, 1994; Kahn and Hobson, 1994, 2005; Resnick et al., 1994; Rittenhouse et al., 1994; Stickgold et al., 1994a,b, 2001; Sutton et al., 1994a,b). "
    ABSTRACT: This article explores the notion that the brain is genetically endowed with an innate virtual reality generator that – through experience-dependent plasticity – becomes a generative or predictive model of the world. This model, which is most clearly revealed in rapid eye movement (REM) sleep dreaming, may provide the theater for conscious experience. Functional neuroimaging evidence for brain activations that are time-locked to rapid eye movements (REMs) endorses the view that waking consciousness emerges from REM sleep – and dreaming lays the foundations for waking perception. In this view, the brain is equipped with a virtual model of the world that generates predictions of its sensations. This model is continually updated and entrained by sensory prediction errors in wakefulness to ensure veridical perception, but not in dreaming. In contrast, dreaming plays an essential role in maintaining and enhancing the capacity to model the world by minimizing model complexity and thereby maximizing both statistical and thermodynamic efficiency. This perspective suggests that consciousness corresponds to the embodied process of inference, realized through the generation of virtual realities (in both sleep and wakefulness). In short, our premise or hypothesis is that the waking brain engages with the world to predict the causes of sensations, while in sleep the brain's generative model is actively refined so that it generates more efficient predictions during waking. We review the evidence in support of this hypothesis – evidence that grounds consciousness in biophysical computations whose neuronal and neurochemical infrastructure has been disclosed by sleep research. (A toy sketch of error-driven model updating follows this entry.)
    Full-text · Article · Oct 2014 · Frontiers in Psychology
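As a toy illustration only (not the authors' model), the claim that the generative model is "continually updated and entrained by sensory prediction errors" in wakefulness can be reduced to a scalar error-driven update: the model's prediction is repeatedly nudged toward noisy sensations by gradient descent on the squared prediction error. All quantities below are invented for illustration.

```python
# Toy error-driven updating of a single prediction toward noisy sensations.
import numpy as np

rng = np.random.default_rng(3)

true_cause = 2.0   # hidden cause of sensations
mu = 0.0           # the model's current prediction
lr = 0.1           # update rate

for _ in range(100):
    sensation = true_cause + 0.3 * rng.standard_normal()  # noisy waking input
    error = sensation - mu                                # prediction error
    mu += lr * error                                      # entrain the model

print(f"learned prediction: {mu:.2f} (true cause = {true_cause})")
```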