Article

Reconstructing visual experiences from brain activity evoked by natural movies.

Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA.
Current Biology 09/2011; 21(19):1641-6. DOI: 10.1016/j.cub.2011.08.031
Source: PubMed

ABSTRACT: Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1, 2] and can form the basis for brain decoding devices [3-5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6-8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10, 11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.
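The abstract describes an encoding model that separates fast visual features from slow hemodynamics and is fit voxel-by-voxel. A minimal sketch of that general idea, using simulated data, a lagged design matrix, and ridge regression (the variable names, lag count, and regularization value are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Hypothetical sketch: fast stimulus features (random stand-ins for
# motion-energy filter outputs) enter the model at several lagged TRs,
# so slow hemodynamic weighting is estimated separately from the fast
# visual features. Weights are fit by ridge regression for one voxel.

rng = np.random.default_rng(0)
n_trs, n_features, n_lags = 400, 20, 4

features = rng.standard_normal((n_trs, n_features))  # stand-in feature time courses

# Lagged design matrix: one copy of the features per hemodynamic delay.
design = np.hstack([np.roll(features, lag, axis=0) for lag in range(1, n_lags + 1)])
design[:n_lags] = 0.0  # zero out rows corrupted by the circular shift

true_w = rng.standard_normal(design.shape[1])
bold = design @ true_w + 0.1 * rng.standard_normal(n_trs)  # simulated voxel response

# Ridge regression: (X'X + lam*I) w = X'y
lam = 1.0
w_hat = np.linalg.solve(design.T @ design + lam * np.eye(design.shape[1]),
                        design.T @ bold)

pred = design @ w_hat
r = np.corrcoef(pred, bold)[0, 1]
print(f"prediction correlation: {r:.3f}")
```

In this simulated setting the fitted weights recover the voxel's response almost exactly; with real BOLD data the same fit quality would be assessed on held-out stimuli.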

Available from: An Thanh Vu, Jul 02, 2015
  • Source
    ABSTRACT: Core communication research questions are increasingly being investigated using brain imaging techniques. A majority of these studies apply a functional magnetic resonance imaging (fMRI) approach. This trend raises two important questions that we address in this article. First, under what conditions can fMRI methodology increase knowledge and refine communication theory? Second, how can editors, reviewers, and readers of communication journals discriminate sound and relevant fMRI research from unsound or irrelevant fMRI research? To address these questions, we first discuss what can and cannot be accomplished with fMRI. Subsequently, we provide a pragmatic introduction to fMRI data collection and analysis for social-science-oriented communication scholars. We include practical guidelines and a checklist for reporting and evaluating fMRI studies.
    Communication Methods and Measures 03/2015; 9(1-2):5-29. DOI: 10.1080/19312458.2014.999754
  • Source
    ABSTRACT: The human visual system is assumed to transform low-level visual features into object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To elucidate this, we compared the biologically plausible HMAX model with the Bag of Words (BoW) model from computer vision. Both computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and both have proven effective in automatic object and scene recognition. The models differ, however, in how they compute visual dictionaries and in their pooling techniques. We investigated where in the brain, and to what extent, human fMRI responses to a short video can be accounted for by multiple hierarchical levels of the HMAX and BoW models. Brain activity of 20 subjects obtained while viewing a short video clip was analyzed voxel-wise using a distance-based variation partitioning method. Results revealed that both HMAX and BoW explain a significant amount of brain activity in early visual regions V1, V2, and V3. However, BoW accounts for brain activity more consistently across subjects than HMAX. Furthermore, the visual dictionary representations of HMAX and BoW explain a significant amount of brain activity in higher areas that are believed to process intermediate features. Overall, our results indicate that, although both HMAX and BoW account for activity in the human visual system, BoW seems to represent neural responses in low- and intermediate-level visual areas of the brain more faithfully.
    Frontiers in Computational Neuroscience 02/2015; 8(168):1-10. DOI: 10.3389/fncom.2014.00168
  • Source
    ABSTRACT: Recent multi-voxel pattern classification (MVPC) studies have shown that in early visual cortex patterns of brain activity generated during mental imagery are similar to patterns of activity generated during perception. This finding implies that low-level visual features (e.g., space, spatial frequency, and orientation) are encoded during mental imagery. However, the specific hypothesis that low-level visual features are encoded during mental imagery is difficult to directly test using MVPC. The difficulty is especially acute when considering the representation of complex, multi-object scenes that can evoke multiple sources of variation that are distinct from low-level visual features. Therefore, we used a voxel-wise modeling and decoding approach to directly test the hypothesis that low-level visual features are encoded in activity generated during mental imagery of complex scenes. Using fMRI measurements of cortical activity evoked by viewing photographs, we constructed voxel-wise encoding models of tuning to low-level visual features. We also measured activity as subjects imagined previously memorized works of art. We then used the encoding models to determine if putative low-level visual features encoded in this activity could pick out the imagined artwork from among thousands of other randomly selected images. We show that mental images can be accurately identified in this way; moreover, mental image identification accuracy depends upon the degree of tuning to low-level visual features in the voxels selected for decoding. These results directly confirm the hypothesis that low-level visual features are encoded during mental imagery of complex scenes. Our work also points to novel forms of brain-machine interaction: we provide a proof-of-concept demonstration of an internet image search guided by mental imagery.
    NeuroImage 10/2014; 105. DOI: 10.1016/j.neuroimage.2014.10.018
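The identification procedure described in the NeuroImage abstract above — using fitted encoding models to pick the imagined image out of thousands of candidates — can be sketched with simulated data. All dimensions, the noise level, and the correlation-based matching rule here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical sketch: per-voxel encoding weights predict the response
# pattern for every candidate image; the candidate whose predicted pattern
# best correlates with the measured activity is the identified image.

rng = np.random.default_rng(2)
n_voxels, n_features, n_candidates = 200, 50, 1000

weights = rng.standard_normal((n_voxels, n_features))        # fitted encoding models
candidates = rng.standard_normal((n_candidates, n_features)) # candidate image features

true_idx = 123
measured = weights @ candidates[true_idx] + rng.standard_normal(n_voxels)

predicted = candidates @ weights.T  # (n_candidates, n_voxels) predicted patterns

# Pearson correlation of each predicted pattern with the measured pattern.
pz = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
mz = (measured - measured.mean()) / measured.std()
scores = pz @ mz / n_voxels

best = int(np.argmax(scores))
print("identified correctly:", best == true_idx)
```

The abstract's key observation maps onto this sketch directly: identification only works to the extent that the chosen voxels' weights (their tuning to low-level features) carry signal about the imagined stimulus.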
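The variation partitioning used in the Frontiers abstract above can be illustrated in simplified form by classical R² commonality analysis over two feature spaces: fit each space alone and both together, then split explained variance into unique and shared parts. This is a hedged sketch with simulated data, not the authors' distance-based method:

```python
import numpy as np

# Hypothetical sketch: two feature spaces (stand-ins for HMAX-like and
# BoW-like representations) share some dimensions; a simulated voxel is
# driven by the shared dimensions plus dimensions unique to space A.

rng = np.random.default_rng(1)
n = 500
shared = rng.standard_normal((n, 3))
unique_a_feats = rng.standard_normal((n, 3))
feats_a = np.hstack([shared, unique_a_feats])               # "HMAX-like" space
feats_b = np.hstack([shared, rng.standard_normal((n, 3))])  # "BoW-like" space

voxel = shared.sum(axis=1) + unique_a_feats.sum(axis=1) + 0.5 * rng.standard_normal(n)

def r2(X, y):
    """Ordinary-least-squares R-squared of y regressed on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ beta) / np.var(y)

r2_a = r2(feats_a, voxel)
r2_b = r2(feats_b, voxel)
r2_ab = r2(np.hstack([feats_a, feats_b]), voxel)

unique_a = r2_ab - r2_b            # variance only space A explains
unique_b = r2_ab - r2_a            # variance only space B explains
shared_part = r2_a + r2_b - r2_ab  # variance both spaces explain
print(unique_a, unique_b, shared_part)
```

Applied voxel-wise, this kind of decomposition is what lets one say that two models explain overlapping activity in early visual areas while one of them also explains unique variance elsewhere.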