Reconstructing visual experiences from brain activity evoked by natural movies.

Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA.
Current Biology 21(19):1641-1646, September 2011. DOI: 10.1016/j.cub.2011.08.031
Source: PubMed

ABSTRACT: Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1, 2] and can form the basis for brain decoding devices [3-5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6-8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10, 11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.
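The core modeling idea in the abstract, describing fast visual information and slow hemodynamics by separate components, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: a single spatial Gabor quadrature pair stands in for the full spatiotemporal motion-energy filter bank, and the gamma-shaped hemodynamic response function and all function names are assumptions for illustration.

```python
import numpy as np

def gabor_quadrature(size=16, freq=0.25, theta=0.0):
    """Quadrature pair of spatial Gabor filters (even/odd phase)."""
    coords = np.arange(size) - size / 2
    x, y = np.meshgrid(coords, coords)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 4) ** 2))
    even = envelope * np.cos(2 * np.pi * freq * xr)
    odd = envelope * np.sin(2 * np.pi * freq * xr)
    return even, odd

def motion_energy(frames, even, odd):
    """Phase-invariant energy per frame: sum of squared quadrature responses.

    frames has shape (time, height, width)."""
    resp_e = np.tensordot(frames, even, axes=([1, 2], [0, 1]))
    resp_o = np.tensordot(frames, odd, axes=([1, 2], [0, 1]))
    return resp_e**2 + resp_o**2

def canonical_hrf(n=20, tr=1.0):
    """Crude gamma-shaped hemodynamic response function (an assumption)."""
    t = np.arange(n) * tr
    hrf = t**5 * np.exp(-t)   # peaks a few seconds after stimulus onset
    return hrf / hrf.sum()

def predict_bold(frames, even, odd):
    """Fast stimulus energy convolved with slow hemodynamics."""
    energy = motion_energy(frames, even, odd)
    return np.convolve(energy, canonical_hrf())[: len(energy)]
```

In this sketch the encoding model for one voxel is the filter energy time series passed through the hemodynamic kernel; the paper fits such a model per voxel and, for decoding, inverts the whole bank of fit models against a sampled natural-movie prior.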

  • Source
    ABSTRACT: Most existing Music Information Retrieval (MIR) technologies require a user to search for a musical document through a query interface. The mental image of the desired music is likely much richer than what the user is able to express through any query interface. This expressivity bottleneck could be circumvented if it were possible to read the music query directly from the user's mind. To the authors' knowledge, no such attempt has been made in the field of MIR so far. However, recent advances in cognitive neuroscience suggest that such a system might be possible. Given these new insights, it seems promising to extend the focus of MIR to include music imagery, possibly forming a sub-discipline that could be called Music Imagery Information Retrieval (MIIR). As a first effort, there was a dedicated session at the Late-Breaking & Demos event of the ISMIR 2012 conference. This paper aims to stimulate research in MIIR by laying out a roadmap for future work.
    13th International Conference on Music Information Retrieval (ISMIR'12) - Late-Breaking & Demo Papers; 01/2012
  • Source
    ABSTRACT: As a technology for reading brain states from measurable brain activity, brain decoding is widely applied in industry and medical science. Despite the high demand in these applications for a universal decoder that can be applied to all individuals, large variation in brain activity across individuals has limited the scope of many studies to the development of individual-specific decoders. In this study, we used a deep neural network (DNN), a nonlinear hierarchical model, to construct a subject-transfer decoder; to our knowledge, it is the first successful DNN-based subject-transfer decoder. When applied to a large-scale functional magnetic resonance imaging (fMRI) database, our DNN-based decoder achieved higher decoding accuracy than baseline methods, including a support vector machine (SVM). To analyze the knowledge acquired by this decoder, we applied principal sensitivity analysis (PSA) and visualized the discriminative features that are common to all subjects in the dataset. The PSA successfully visualized the subject-independent features that contribute to the subject-transferability of the trained decoder.
  • ABSTRACT: In natural-stimulus fMRI during video watching, it is natural to postulate that a participant's attention system responds to shot changes in the video stream. However, the quantitative relationship between the functional activity of the attention system and the dynamics of video shot changes has rarely been explored. This paper presents a novel framework for modeling the functional interactions and dynamics within the human attention system via natural-stimulus fMRI and for learning fMRI-based brain-response predictors of video shot changes. The basic idea is to derive sub-networks from the attention system and correlate functional-synchronization measurements of these sub-networks with video shot changes. The most relevant sub-networks are then identified from training samples, and a regression model is constructed as the predictor of video shot changes. In the application stage, the learned predictive models estimated video shot changes with good accuracy on independent testing datasets. This study suggests that fMRI-guided predictive models of functional attention-network activity can potentially serve as brain decoders of video shot changes.
    2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI 2014); 04/2014
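The shot-change prediction pipeline described in the last abstract, measure sub-network synchronization, then regress it onto shot changes, can be sketched with synthetic data. Everything below is an illustrative assumption, not the authors' implementation: a windowed Pearson correlation stands in for their synchronization measure, and a two-parameter least-squares fit stands in for their regression model.

```python
import numpy as np

def sliding_corr(a, b, win=20):
    """Windowed Pearson correlation between two time series: a simple
    functional-synchronization measure for a pair of sub-networks."""
    out = np.empty(len(a) - win + 1)
    for i in range(len(out)):
        out[i] = np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
    return out

def fit_shot_predictor(sync, labels):
    """Least-squares regression from synchronization to shot-change labels."""
    X = np.column_stack([np.ones_like(sync), sync])
    w, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return w   # [intercept, slope]

# Synthetic stand-ins for two attention sub-network signals that share a
# common driver except in the window after each shot change, where their
# synchronization collapses (an assumption made for illustration).
rng = np.random.default_rng(1)
T, win = 300, 20
shots = [60, 150, 240]
common = rng.standard_normal(T)
a = common + 0.1 * rng.standard_normal(T)
b = common + 0.1 * rng.standard_normal(T)
for s in shots:
    b[s:s + win] = rng.standard_normal(win)   # desynchronized span

sync = sliding_corr(a, b, win)
labels = np.array([1.0 if any(abs(i - s) <= win // 2 for s in shots) else 0.0
                   for i in range(len(sync))])
w = fit_shot_predictor(sync, labels)
```

With this construction the fitted slope is negative: lower synchronization predicts a shot change, which is the direction of the relationship the paper exploits.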
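The subject-transfer idea in the DNN decoder abstract above can be illustrated with a deliberately simplified stand-in: a classifier trained on data pooled across subjects and evaluated on an unseen subject. The synthetic data, the linear model used in place of the paper's DNN, and all names below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subject(n=100, d=20, shift=0.0):
    """Synthetic stand-in for one subject's fMRI features: a shared
    discriminative signal in feature 0 plus a subject-specific offset."""
    y = rng.integers(0, 2, n)
    x = 0.5 * rng.standard_normal((n, d))
    x[:, 0] += (2.0 * y - 1.0) * 2.0   # subject-independent class signal
    return x + shift, y                 # subject-specific baseline shift

def fit_linear(x, y):
    """Least-squares linear classifier (stands in for the paper's DNN)."""
    xb = np.hstack([x, np.ones((len(x), 1))])
    w, *_ = np.linalg.lstsq(xb, 2.0 * y - 1.0, rcond=None)
    return w

def accuracy(w, x, y):
    xb = np.hstack([x, np.ones((len(x), 1))])
    return float(np.mean((xb @ w > 0) == y))

# Pool two subjects for training, then test transfer to an unseen subject.
xa, ya = make_subject(shift=0.3)
xb_, yb = make_subject(shift=-0.3)
xc, yc = make_subject(shift=0.6)
w = fit_linear(np.vstack([xa, xb_]), np.concatenate([ya, yb]))
```

Because the class signal is shared across subjects while the baseline shifts are not, the pooled classifier transfers to the held-out subject; visualizing which input features drive such a decoder is the role PSA plays in the paper.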
