ABSTRACT: Sensory processing in the brain includes three key operations: multisensory integration, the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations, the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned, but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
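As a rough illustration of the learning rule named in this abstract, the following is a minimal CD-1 (one-step contrastive-divergence) training step for a binary restricted Boltzmann machine. All sizes, data, and variable names are illustrative stand-ins, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.05):
    """One contrastive-divergence (CD-1) update for a binary RBM."""
    # Up pass: hidden probabilities and a sample, given the data vector v0.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # One Gibbs step down and back up (the "reconstruction").
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_hid)
    # CD-1 update: data statistics minus reconstruction statistics.
    W = W + lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_vis = b_vis + lr * (v0 - v1)
    b_hid = b_hid + lr * (p_h0 - p_h1)
    return W, b_vis, b_hid

# Illustrative setup: visible units stand in for the unisensory populations,
# hidden units for the multisensory population; the data here are random
# placeholders rather than modeled population activity.
n_vis, n_hid = 20, 10
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
for v0 in (rng.random((500, n_vis)) < 0.5).astype(float):
    W, b_vis, b_hid = cd1_step(v0, W, b_vis, b_hid)
```

The CD-1 update approximates the gradient of the data log-likelihood by replacing the intractable model expectation with statistics from a one-step reconstruction, which is what makes the rule local and biologically plausible.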
ABSTRACT: Most voluntary actions rely on neural circuits that map sensory cues onto appropriate motor responses. One might expect that for everyday movements, like reaching, this mapping would remain stable over time, at least in the absence of error feedback. Here we describe a simple and novel psychophysical phenomenon in which recent experience shapes the statistical properties of reaching, independent of any movement errors. Specifically, when recent movements are made to targets near a particular location, subsequent movements to that location become less variable, but at the cost of increased bias for reaches to other targets. This process exhibits the variance-bias tradeoff that is a hallmark of Bayesian estimation. We provide evidence that this process reflects a fast, trial-by-trial learning of the prior distribution of targets. We also show that these results may reflect an emergent property of associative learning in neural circuits. We demonstrate that adding Hebbian (associative) learning to a model network for reach planning leads to a continuous modification of network connections that biases network dynamics toward activity patterns associated with recent inputs. This learning process quantitatively captures the key results of our experimental data in human subjects, including the effect that recent experience has on the variance-bias tradeoff. This network also provides a good approximation of a normative Bayesian estimator. These observations illustrate how associative learning can incorporate recent experience into ongoing computations in a statistically principled way.
Journal of Neuroscience 07/2011; 31(27):10050-9. · 6.91 Impact Factor
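The variance-bias tradeoff described above can be sketched numerically with a standard Gaussian Bayesian estimator rather than the paper's network model; all parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# A Gaussian prior over target location, centered on recent targets,
# combined with a noisy observation of the current target.
prior_mean, prior_var = 0.0, 4.0     # built up from recent trials
obs_var = 1.0                        # sensory noise on the current target

def bayes_estimate(obs):
    # Posterior mean of two Gaussians: precision-weighted average.
    w = (1 / obs_var) / (1 / obs_var + 1 / prior_var)
    return w * obs + (1 - w) * prior_mean

for target in (0.0, 3.0):  # one target at the prior mean, one far away
    obs = target + np.sqrt(obs_var) * rng.standard_normal(10_000)
    est = bayes_estimate(obs)
    print(f"target={target}: bias={est.mean() - target:+.3f}, "
          f"var={est.var():.3f} (obs var was {obs_var})")
# Estimates are less variable than raw observations everywhere, unbiased at
# the prior mean, but increasingly biased toward it for distant targets.
```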
ABSTRACT: We have recently shown that the statistical properties of goal-directed reaching in human subjects depend on recent experience in a way that is consistent with the presence of adaptive Bayesian priors (Verstynen and Sabes, 2011). We also showed that when Hebbian (associative) learning is added to a simple line-attractor network model, the network provides both a good account of the experimental data and a good approximation to a normative Bayesian estimator. This latter conclusion was based entirely on empirical simulations of the network model. Here we study the effects of Hebbian learning on the line-attractor model using a combination of analytic and computational approaches. Specifically, we find an approximate solution to the network steady state. We show numerically that the solution approximates Bayesian estimation. We next show that the solution contains two opposing terms: one that depends on the distribution of recent network activity and one that depends on the current network inputs. These results provide additional intuition for why Hebbian learning mimics adaptive Bayesian estimation in this network.
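A minimal simulation in the spirit of the model described above: linear recurrent dynamics on a line of tuned units, with a Hebbian weight update after each trial. The architecture and every parameter are simplifications assumed for illustration, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(2)

# n units tuned to locations x_i on a line; linear recurrent dynamics
# r' = -r + W r + input, read out by the population center of mass.
n = 50
x = np.linspace(-1, 1, n)

def bump(center, width=0.2):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def steady_state(W, inp):
    # Fixed point of r' = -r + W r + inp  =>  r = (I - W)^{-1} inp.
    return np.linalg.solve(np.eye(n) - W, inp)

def readout(r):
    return float(x @ r / r.sum())

# Repeated inputs near 0.5, with a Hebbian update after each trial. eta and
# the trial count are kept small enough that the spectral radius of W stays
# below 1, so the linear dynamics remain stable.
W = np.zeros((n, n))
eta = 5e-3
for _ in range(200):
    r = steady_state(W, bump(0.5 + 0.05 * rng.standard_normal()))
    W += eta * np.outer(r, r) / n   # associative (Hebbian) term

# Probing a different location now yields an estimate pulled slightly toward
# the practiced location, the signature of a learned prior.
print("probe at 0.0 reads out:", round(readout(steady_state(W, bump(0.0))), 3))
```

The learned weights add a term to the steady state that depends on the distribution of recent activity, which competes with the term driven by the current input, mirroring the two opposing terms identified analytically in the abstract.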
ABSTRACT: The planning and control of sensory-guided movements requires the integration of multiple sensory streams. Although the information conveyed by different sensory modalities is often overlapping, the shared information is represented differently across modalities during the early stages of cortical processing. We ask how these diverse sensory signals are represented in multimodal sensorimotor areas of cortex in macaque monkeys. Although a common modality-independent representation might facilitate downstream readout, previous studies have found that modality-specific representations in multimodal cortex reflect upstream spatial representations. For example, visual signals have a more eye-centered representation. We recorded neural activity from two parietal areas involved in reach planning, area 5 and the medial intraparietal area (MIP), as animals reached to visual, combined visual and proprioceptive, and proprioceptive targets while fixing their gaze on another location. In contrast to other multimodal cortical areas, the same spatial representations are used to represent visual and proprioceptive signals in both area 5 and MIP. However, these representations are heterogeneous. Although we observed a posterior-to-anterior gradient in population responses in parietal cortex, from more eye-centered to more hand- or body-centered representations, we do not observe the simple and discrete reference frame representations suggested by studies that focused on identifying the "best-match" reference frame for a given cortical area. In summary, we find modality-independent representations of spatial information in parietal cortex, although these representations are complex and heterogeneous.
Journal of Neuroscience 05/2011; 31(18):6661-73. · 6.91 Impact Factor
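For readers unfamiliar with the terminology, a toy example (hypothetical geometry) of what eye-centered, hand-centered, and "intermediate" representations of a reach target mean:

```python
import numpy as np

# Hypothetical 2-D geometry; all positions in a common body-centered frame.
gaze = np.array([10.0, 0.0])     # fixation point
hand = np.array([-5.0, -10.0])   # initial hand position
target = np.array([20.0, 5.0])

target_eye = target - gaze       # eye-centered (gaze-relative) coordinates
target_hand = target - hand      # hand-centered coordinates

# An "intermediate" reference frame can be modeled as tuning to a weighted
# mixture of the two; sweeping w mimics the posterior-to-anterior gradient
# (w = 1 fully eye-centered, w = 0 fully hand-centered).
for w in (1.0, 0.5, 0.0):
    print(f"w={w}: tuned coordinate = {w * target_eye + (1 - w) * target_hand}")
```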
ABSTRACT: Although multisensory integration has been well modeled at the behavioral level, the link between these behavioral models and the underlying neural circuits is still not clear. This gap is even greater for the problem of sensory integration during movement planning and execution. The difficulty lies in applying simple models of sensory integration to the complex computations that are required for movement control and to the large networks of brain areas that perform these computations. Here I review psychophysical, computational, and physiological work on multisensory integration during movement planning, with an emphasis on goal-directed reaching. I argue that sensory transformations must play a central role in any modeling effort. In particular, the statistical properties of these transformations factor heavily into the way in which downstream signals are combined. As a result, our models of optimal integration are only expected to apply "locally," that is, independently for each brain area. I suggest that local optimality can be reconciled with globally optimal behavior if one views the collection of parietal sensorimotor areas not as a set of task-specific domains, but rather as a palette of complex, sensorimotor representations that are flexibly combined to drive downstream activity and behavior.
Progress in Brain Research 01/2011; 191:195-209. · 4.19 Impact Factor
ABSTRACT: The sensory signals that drive movement planning arrive in a variety of 'reference frames', and integrating or comparing them requires sensory transformations. We propose a model in which the statistical properties of sensory signals and their transformations determine how these signals are used. This model incorporates the patterns of gaze-dependent errors that we found in our human psychophysics experiment when the sensory signals available for reach planning were varied. These results challenge the widely held ideas that error patterns directly reflect the reference frame of the underlying neural representation and that it is preferable to use a single common reference frame for movement planning. We found that gaze-dependent error patterns, often cited as evidence for retinotopic reach planning, can be explained by a transformation bias and are not exclusively linked to retinotopic representations. Furthermore, the presence of multiple reference frames allows for optimal use of available sensory information and explains task-dependent reweighting of sensory signals.
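The core idea, that noisy sensory transformations should change the optimal weighting of signals, can be sketched with a standard minimum-variance (precision-weighted) combination; the variances below are hypothetical:

```python
# Hypothetical 1-D example: a visual cue in eye coordinates and a
# proprioceptive cue in body coordinates must be compared in one frame.
var_vis, var_prop = 1.0, 2.0
var_transform = 0.5   # noise added by the eye-to-body transformation

# Comparing in body coordinates: the visual cue must be transformed,
# so its effective variance grows and its weight shrinks.
eff_var_vis = var_vis + var_transform
w_vis_body = (1 / eff_var_vis) / (1 / eff_var_vis + 1 / var_prop)
print(f"weight on vision when comparing in body frame: {w_vis_body:.2f}")

# Comparing in eye coordinates instead penalizes proprioception.
eff_var_prop = var_prop + var_transform
w_vis_eye = (1 / var_vis) / (1 / var_vis + 1 / eff_var_prop)
print(f"weight on vision when comparing in eye frame:  {w_vis_eye:.2f}")
# The optimal weighting is frame-dependent, so gaze-dependent error
# patterns need not reveal a single underlying reference frame.
```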
ABSTRACT: Visuomotor coordination requires both the accurate alignment of spatial information from different sensory streams and the ability to convert these sensory signals into accurate motor commands. Both of these processes are highly plastic, as illustrated by the rapid adaptation of goal-directed movements following exposure to shifted visual feedback. Although visual-shift adaptation is a widely used model of sensorimotor learning, the multifaceted adaptive response is typically poorly quantified. We present an approach to quantitatively characterizing both sensory and task-dependent components of adaptation. Sensory aftereffects are quantified with "alignment tests" that provide a localized, two-dimensional measure of sensory recalibration. These sensory effects obey a precise form of "additivity," in which the shift in sensory alignment between vision and the right hand is equal to the vector sum of the shifts between vision and the left hand and between the right and left hands. This additivity holds at the exposure location and at a second generalization location. These results support a component transformation model of sensory coordination, in which eye-hand and hand-hand alignment relies on a sequence of shared sensory transformations. We also ask how these sensory effects compare with the aftereffects measured in target reaching and tracking tasks. We find that the aftereffect depends on both the task performed during feedback-shift exposure and on the testing task. The results suggest the presence of both a general sensory recalibration and a task-dependent sensorimotor effect. The task-dependent effect is observed in highly stereotyped reaching movements, but not in the more variable tracking task.
Journal of Neurophysiology 12/2007; 98(5):2827-41. · 3.30 Impact Factor
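The additivity claim above is a simple vector identity; a toy check with hypothetical shift measurements:

```python
import numpy as np

# Hypothetical 2-D misalignment shifts (cm) as measured by alignment tests.
shift_vis_left = np.array([1.2, -0.4])    # vision vs. left hand
shift_left_right = np.array([0.6, 0.9])   # left hand vs. right hand

# Component-transformation model: the vision/right-hand shift should equal
# the vector sum of the two component shifts.
predicted_vis_right = shift_vis_left + shift_left_right
print("predicted vision/right-hand shift:", predicted_vis_right)
```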
ABSTRACT: The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for ≥20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalizes to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
Journal of Neurophysiology 05/2007; 97(4):3057-69. · 3.30 Impact Factor
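A sketch of the kind of linear dynamical model described above, with illustrative (not fitted) parameters; it reproduces the qualitative point that accumulated state noise induces temporal correlations in the reach errors:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar adaptation state x driven by error feedback, plus state noise and
# performance noise. All parameter values are made up for illustration.
a, b = 0.98, 0.25            # retention and error correction (~25%/trial)
sd_state, sd_perf = 0.5, 0.8
n_trials = 2000

shift = rng.normal(0.0, 0.5, n_trials)     # random visual feedback shifts
x = 0.0
errors = np.empty(n_trials)
for t in range(n_trials):
    # Visually perceived reach error on this trial.
    errors[t] = shift[t] - x + rng.normal(0.0, sd_perf)
    # State update: decay toward baseline, error-driven correction,
    # and state noise that accumulates across trials.
    x = a * x + b * errors[t] + rng.normal(0.0, sd_state)

# Accumulated state noise makes successive errors positively correlated;
# with sd_state = 0, the error-correction loop instead makes the lag-1
# correlation slightly negative.
lag1 = np.corrcoef(errors[:-1], errors[1:])[0, 1]
print(f"lag-1 error autocorrelation: {lag1:+.3f}")
```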
ABSTRACT: Recent studies have employed simple linear dynamical systems to model trial-by-trial dynamics in various sensorimotor learning tasks. Here we explore the theoretical and practical considerations that arise when employing the general class of linear dynamical systems (LDS) as a model for sensorimotor learning. In this framework, the state of the system is a set of parameters that define the current sensorimotor transformation-the function that maps sensory inputs to motor outputs. The class of LDS models provides a first-order approximation for any Markovian (state-dependent) learning rule that specifies the changes in the sensorimotor transformation that result from sensory feedback on each movement. We show that modeling the trial-by-trial dynamics of learning provides a substantially enhanced picture of the process of adaptation compared to measurements of the steady state of adaptation derived from more traditional blocked-exposure experiments. Specifically, these models can be used to quantify sensory and performance biases, the extent to which learned changes in the sensorimotor transformation decay over time, and the portion of motor variability due to either learning or performance variability. We show that previous attempts to fit such models with linear regression have not generally yielded consistent parameter estimates. Instead, we present an expectation-maximization algorithm for fitting LDS models to experimental data and describe the difficulties inherent in estimating the parameters associated with feedback-driven learning. Finally, we demonstrate the application of these methods in a simple sensorimotor learning experiment: adaptation to shifted visual feedback during reaching.
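The regression problem mentioned at the end of this abstract is easy to demonstrate: performance noise in the regressor attenuates the estimated retention rate, a standard errors-in-variables bias. A toy simulation with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate the simplest LDS: hidden state x (the sensorimotor mapping),
# observed movement y contaminated by performance noise.
a_true, sd_state, sd_perf, n = 0.9, 0.2, 0.8, 20_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true * x[t - 1] + rng.normal(0, sd_state)
y = x + rng.normal(0, sd_perf, n)

# Naive approach: regress y[t] on y[t-1] to estimate the retention rate a.
a_naive = np.polyfit(y[:-1], y[1:], 1)[0]
print(f"true a = {a_true}, naive regression estimate = {a_naive:.3f}")
# Noise in the regressor shrinks the estimate far below the true value,
# which is why a latent-state method such as expectation-maximization
# is needed to recover the parameters consistently.
```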
ABSTRACT: When planning target-directed reaching movements, human subjects combine visual and proprioceptive feedback to form two estimates of the arm's position: one to plan the reach direction, and another to convert that direction into a motor command. These position estimates are based on the same sensory signals but rely on different combinations of visual and proprioceptive input, suggesting that the brain weights sensory inputs differently depending on the computation being performed. Here we show that the relative weighting of vision and proprioception depends both on the sensory modality of the target and on the information content of the visual feedback, and that these factors affect the two stages of planning independently. The observed diversity of weightings demonstrates the flexibility of sensory integration and suggests a unifying principle by which the brain chooses sensory inputs so as to minimize errors arising from the transformation of sensory signals between coordinate frames.
ABSTRACT: When planning goal-directed reaches, subjects must estimate the position of the arm by integrating visual and proprioceptive signals from the sensory periphery. These integrated position estimates are required at two stages of motor planning: first to determine the desired movement vector, and second to transform the movement vector into a joint-based motor command. We quantified the contributions of each sensory modality to the position estimate formed at each planning stage. Subjects made reaches in a virtual reality environment in which vision and proprioception were dissociated by shifting the location of visual feedback. The relative weighting of vision and proprioception at each stage was then determined using computational models of feedforward motor control. We found that the position estimate used for movement vector planning relies mostly on visual input, whereas the estimate used to compute the joint-based motor command relies more on proprioceptive signals. This suggests that when estimating the position of the arm, the brain selects different combinations of sensory input based on the computation in which the resulting estimate will be used.
Journal of Neuroscience 09/2003; 23(18):6982-92. · 6.91 Impact Factor
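A toy calculation, with hypothetical weights and geometry, of what "different weightings at the two planning stages" means in the preceding two abstracts:

```python
import numpy as np

# Illustrative numbers: dissociated feedback, vision shifted 2 cm rightward.
hand_vis = np.array([2.0, 0.0])    # visually displayed hand position
hand_prop = np.array([0.0, 0.0])   # felt hand position
target = np.array([0.0, 10.0])

# Same sensory signals, different weightings at the two stages
# (weights are made up for illustration, not fitted values).
est_stage1 = 0.9 * hand_vis + 0.1 * hand_prop   # movement-vector planning
est_stage2 = 0.3 * hand_vis + 0.7 * hand_prop   # motor-command generation

movement_vector = target - est_stage1
print("stage-1 position estimate:", est_stage1)
print("stage-2 position estimate:", est_stage2)
print("planned movement vector:  ", movement_vector)
# Because the two stages disagree about where the hand is, the pattern of
# reach errors across feedback conditions can be used to recover the two
# weightings separately, as done with the papers' feedforward models.
```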
ABSTRACT: A Little Background: Singular values and matrix inversion. For non-symmetric matrices, the eigenvalues and singular values are not equivalent. However, they share one important property. Fact 1: A matrix A, N×N, is invertible iff all of its singular values are non-zero.
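A quick numerical check of Fact 1 with numpy:

```python
import numpy as np

# A square matrix is invertible iff all of its singular values are non-zero.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # rank 1, hence singular
print("singular values of A:", np.linalg.svd(A, compute_uv=False))
# -> one singular value is (numerically) zero, so A is not invertible;
#    np.linalg.inv(A) would raise LinAlgError.

B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])      # non-symmetric but full rank
print("singular values of B:", np.linalg.svd(B, compute_uv=False))
print("inverse of B:\n", np.linalg.inv(B))
```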