The relationship between optimal and biologically plausible decoding of stimulus velocity in the retina

Trinity Centre for Bioengineering and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland.
Journal of the Optical Society of America A (Impact Factor: 1.56). 11/2009; 26(11):B25-42. DOI: 10.1364/JOSAA.26.000B25
Source: PubMed


A major open problem in systems neuroscience is to understand the relationship between behavior and the detailed spiking properties of neural populations. We assess how faithfully velocity information can be decoded from a population of spiking model retinal neurons whose spatiotemporal receptive fields and ensemble spike train dynamics are closely matched to real data. We describe how to compute the optimal Bayesian estimate of image velocity given the population spike train response and show that, in the case of global translation of an image with known intensity profile, on average the spike train ensemble signals speed with a fractional standard deviation of about 2% across a specific set of stimulus conditions. We further show how to compute the Bayesian velocity estimate in the case where we only have some a priori information about the (naturalistic) spatial correlation structure of the image but do not know the image explicitly. As expected, the performance of the Bayesian decoder is shown to be less accurate with decreasing prior image information. There turns out to be a close mathematical connection between a biologically plausible "motion energy" method for decoding the velocity and the Bayesian decoder in the case that the image is not known. Simulations using the motion energy method and the Bayesian decoder with unknown image reveal that they result in fractional standard deviations of 10% and 6%, respectively, across the same set of stimulus conditions. Estimation performance is rather insensitive to the details of the precise receptive field location, correlated activity between cells, and spike timing.
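The Bayesian decoding idea in this abstract can be illustrated with a deliberately simplified sketch: Gaussian speed-tuning curves and Poisson spike counts stand in for the paper's full spatiotemporal receptive-field and spike-train model, and all parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 40 speed-tuned cells with Gaussian tuning
# curves. The paper's decoder uses full spatiotemporal receptive fields
# and spike-train likelihoods; this toy uses counts in a fixed window.
speeds = np.linspace(0.5, 8.0, 200)      # candidate speeds (deg/s)
preferred = np.linspace(1.0, 7.0, 40)    # preferred speeds of the cells
width, peak_rate, T = 1.5, 50.0, 0.5     # tuning width, peak Hz, window (s)

def rates(v):
    """Poisson firing rates of all cells for stimulus speed v."""
    return peak_rate * np.exp(-0.5 * ((v - preferred) / width) ** 2) + 1.0

true_speed = 4.0
counts = rng.poisson(rates(true_speed) * T)   # observed spike counts

# Log posterior over speed under a flat prior: sum of Poisson
# log-likelihoods, one term per cell.
log_post = np.array([np.sum(counts * np.log(rates(v) * T) - rates(v) * T)
                     for v in speeds])
v_hat = speeds[np.argmax(log_post)]           # MAP speed estimate
```

With a flat prior the MAP estimate reduces to maximum likelihood; the paper applies the same posterior-based principle to full spike-train responses, which is what yields the fractional standard deviations quoted above.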

  • ABSTRACT: We perceive visual stimuli even in the presence of incessant image motion due to fixational eye movements. We would like to better understand how these movements affect the perception of visual stimuli, without any assumed knowledge of the trajectory of the eye during a given fixation (Pitkow et al., 2007; Rucci et al., 2007; Rucci, 2008; Pfau et al., 2009; Burak et al., 2009). The challenge lies in the estimation of the image given noisy, spatiotemporally filtered retinal ganglion cell responses, without detailed knowledge of the true eye path on any given trial. To approach the problem, we construct an extended hidden Markov model (HMM), with eye position included as a hidden Markovian state variable. Retinal responses are modeled using a generalized linear model approach (Pillow et al., 2008; Pfau et al., 2009; Lalor et al., 2009), which is sufficiently general to incorporate realistic spatiotemporal filtering as well as auto- and cross-correlations between spike trains. We develop an expectation-maximization (EM) approach to infer the eye path and the underlying image simultaneously: in this setting, the expectation (E) step corresponds to the inference of the eye path given a fixed estimated image, and the maximization (M) step corresponds to the inference of the image given the posterior distribution on the eye path. We use a sequential Monte Carlo ("particle filtering") method (Doucet et al., 2000) to carry out the E step, and employ a computationally efficient concave optimization approach (Pillow et al., 2009; Paninski et al., 2009; Lalor et al., 2009) to compute the maximum a posteriori (MAP) estimate of the image in the M step.
This EM method turns out to be significantly more stable and accurate than the computationally intensive mixture-of-Gaussians filter approach developed by Pfau et al. (2009), and may also potentially be applied to correct eye-movement artifacts in the estimation of receptive fields in visual sensory neurophysiology experiments.
    No preview · Article
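The E step in the abstract above, tracking a hidden eye position by sequential Monte Carlo, can be illustrated with a toy bootstrap particle filter. Here a Gaussian observation of a 1-D random-walk eye position stands in for the spike-train likelihood used in the paper; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy E step: bootstrap particle filter tracking a 1-D eye position x_t
# from noisy observations y_t. The real model observes retinal spike
# trains; a Gaussian observation is used here purely for illustration.
T, n_particles = 100, 500
drift_sd, obs_sd = 0.1, 0.3

# Simulate a random-walk eye path and noisy observations of it.
x_true = np.cumsum(rng.normal(0.0, drift_sd, T))
y = x_true + rng.normal(0.0, obs_sd, T)

particles = np.zeros(n_particles)
estimates = np.empty(T)
for t in range(T):
    particles += rng.normal(0.0, drift_sd, n_particles)  # propagate prior
    logw = -0.5 * ((y[t] - particles) / obs_sd) ** 2     # log likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates[t] = np.sum(w * particles)                 # posterior mean
    idx = rng.choice(n_particles, n_particles, p=w)      # resample
    particles = particles[idx]

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
```

Because the filter pools information over time, its RMSE falls below the raw observation noise; in the paper's EM loop this filtered posterior over the eye path feeds the M-step image estimate.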
  • ABSTRACT: The estimation of visual motion has long been studied as a paradigmatic neural computation, and multiple models have been advanced to explain behavioral and neural responses to motion signals. A broad class of models, originating with the Reichardt correlator model, proposes that animals estimate motion by computing a temporal cross-correlation of light intensities from two neighboring points in visual space. These models provide a good description of experimental data in specific contexts but cannot explain motion percepts in stimuli lacking pairwise correlations. Here, we develop a theoretical formalism that can accommodate diverse stimuli and behavioral goals. To achieve this, we treat motion estimation as a problem of Bayesian inference. Pairwise models emerge as one component of the generalized strategy for motion estimation. However, correlation functions beyond second order enable more accurate motion estimation. Prior expectations that are asymmetric with respect to bright and dark contrast use correlations of both even and odd orders, and we show that psychophysical experiments using visual stimuli with symmetric probability distributions for contrast cannot reveal whether the subject uses odd-order correlators for motion estimation. This result highlights a gap in previous experiments, which have largely relied on symmetric contrast distributions. Our theoretical treatment provides a natural interpretation of many visual motion percepts, indicates that motion estimation should be revisited using a broader class of stimuli, demonstrates how correlation-based motion estimation is related to stimulus statistics, and provides multiple experimentally testable predictions.
    Full-text · Article · Aug 2011 · Proceedings of the National Academy of Sciences
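The pairwise (Reichardt-type) correlator that the abstract above takes as its starting point can be sketched in a few lines: two nearby sampling points multiply each other's delayed signals, and the opponent (left-arm minus right-arm) output is positive for motion in the preferred direction. The stimulus, delay, and spatial offset below are hypothetical.

```python
import numpy as np

# Minimal Reichardt-correlator sketch: point B sees the same drifting
# luminance signal as point A, but `shift` samples later (rightward
# motion). Multiplying each point's signal by the neighbor's delayed
# signal and subtracting the two arms gives an opponent motion signal.
t = np.arange(0.0, 10.0, 0.01)       # 10 s sampled at 100 Hz
delay = 5                            # correlator temporal delay (samples)
shift = 5                            # motion lag between A and B (samples)

s = np.sin(2 * np.pi * 0.7 * t)      # luminance at point A (7 full cycles)
s_right = np.roll(s, shift)          # the same signal, later, at point B

# Opponent output, time-averaged: A(t-d)*B(t) - B(t-d)*A(t).
left_arm = np.roll(s, delay) * s_right
right_arm = np.roll(s_right, delay) * s
motion = np.mean(left_arm - right_arm)   # > 0 for rightward motion
```

Reversing the direction of motion (negative `shift`) flips the sign of `motion`, which is exactly the pairwise cross-correlation computation that the paper generalizes with higher-order correlators.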