Article

An introductory review of information theory in the context of computational neuroscience.

Institute for Telecommunications Research, University of South Australia.
Biological Cybernetics 07/2011; 105(1):55-70. DOI: 10.1007/s00422-011-0451-9
Source: DBLP

ABSTRACT: This article introduces several fundamental concepts in information theory from the perspective of their origins in engineering. Understanding such concepts is important in neuroscience for two reasons. First, simply applying formulae from information theory without understanding the assumptions behind their definitions can lead to erroneous results and conclusions. Second, this century will see a convergence of information theory and neuroscience; information theory will expand its foundations to incorporate biological processes more comprehensively, thereby helping to reveal how neuronal networks achieve their remarkable information-processing abilities.
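
A concrete illustration of the first point (not from the article): the naive "plug-in" estimator of mutual information treats the empirical distribution as the true one and is therefore biased upward for finite data, a well-known pitfall when analyzing neural recordings. A minimal Python sketch, with illustrative names and an arbitrary 8-level discretization:

    import numpy as np

    rng = np.random.default_rng(0)

    def plugin_mi_bits(x, y, k=8):
        """Naive 'plug-in' mutual information estimate (bits) from joint counts."""
        joint, _, _ = np.histogram2d(x, y, bins=k)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))

    # 'Stimulus' and 'response' are independent here, so the true mutual
    # information is exactly 0 bits; the plug-in estimate is nonetheless
    # systematically positive, and the bias grows as the sample size shrinks.
    for n in (20, 200, 20000):
        est = np.mean([plugin_mi_bits(rng.integers(0, 8, n), rng.integers(0, 8, n))
                       for _ in range(100)])
        print(f"n = {n:6d}   mean plug-in MI = {est:.3f} bits")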

Related publications:

  • ABSTRACT: A central problem in motor control is understanding how the many biomechanical degrees of freedom are coordinated to achieve a common goal. An especially puzzling aspect of coordination is that behavioral goals are achieved reliably and repeatedly with movements rarely reproducible in their detail. Existing theoretical frameworks emphasize either goal achievement or the richness of motor variability, but fail to reconcile the two. Here we propose an alternative theory based on stochastic optimal feedback control. We show that the optimal strategy in the face of uncertainty is to allow variability in redundant (task-irrelevant) dimensions. This strategy does not enforce a desired trajectory, but uses feedback more intelligently, correcting only those deviations that interfere with task goals. From this framework, task-constrained variability, goal-directed corrections, motor synergies, controlled parameters, simplifying rules and discrete coordination modes emerge naturally. We present experimental results from a range of motor tasks to support this theory.
    Nature Neuroscience 12/2002; 5(11):1226-1235. DOI: 10.1038/nn963
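
    The "minimal intervention" idea above can be caricatured in a few lines: apply feedback only to the task-relevant combination of effectors and let noise accumulate in the redundant one. A toy sketch in Python (not the paper's stochastic optimal feedback control model; the gain and noise level are arbitrary):

        import numpy as np

        rng = np.random.default_rng(1)
        target, gain, noise = 1.0, 0.5, 0.02
        x = np.zeros((1000, 2))            # 1000 trials, two redundant effectors

        for _ in range(200):               # time steps within a movement
            task_error = x.sum(axis=1) - target
            # Correct only the task-relevant direction (the sum x1 + x2);
            # the redundant difference x1 - x2 receives no feedback at all.
            x -= 0.5 * gain * task_error[:, None]
            x += noise * rng.standard_normal(x.shape)    # motor noise

        print("std of task-relevant sum x1 + x2:", x.sum(axis=1).std())
        print("std of redundant diff   x1 - x2:", (x[:, 0] - x[:, 1]).std())
        # The sum stays tightly regulated while variability accumulates in the
        # task-irrelevant dimension, reproducing task-constrained variability.
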
  • ABSTRACT: In control, stability captures the reproducibility of motions and the robustness to environmental and internal perturbations. This paper examines how stability can be evaluated in human movements, and possible mechanisms by which humans ensure stability. First, a measure of stability is introduced, which is simple to apply to human movements and corresponds to Lyapunov exponents. Its application to real data shows that it is able to distinguish effectively between stable and unstable dynamics. A computational model is then used to investigate stability in human arm movements, which takes into account motor output variability and computes the force to perform a task according to an inverse dynamics model. Simulation results suggest that even a large time delay does not affect movement stability as long as the reflex feedback is small relative to muscle elasticity. Simulations are also used to demonstrate that existing learning schemes, using a monotonic antisymmetric update law, cannot compensate for unstable dynamics. An impedance compensation algorithm is introduced to learn unstable dynamics, which produces similar adaptation responses to those found in experiments.
    Biological Cybernetics 01/2006; 94:20-32. DOI: 10.1007/s00422-005-0025-9
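
    A stability measure "corresponding to Lyapunov exponents" can be illustrated numerically: track two trajectories that start a tiny distance apart, repeatedly renormalize their separation, and average the logarithmic divergence rate. A generic sketch (the paper's measure for human movement data is more involved):

        import numpy as np

        def largest_lyapunov(step, x0, eps=1e-8, n=2000):
            """Average log separation rate of nearby trajectories of the map
            x[t+1] = step(x[t]), in units of 1/step; positive means unstable."""
            x = np.asarray(x0, dtype=float)
            y = x.copy()
            y[0] += eps
            acc = 0.0
            for _ in range(n):
                x, y = step(x), step(y)
                d = np.linalg.norm(y - x)
                acc += np.log(d / eps)
                y = x + (y - x) * (eps / d)    # renormalize the perturbation
            return acc / n

        dt = 0.01
        # Point mass with a restoring vs. repelling elastic force (Euler steps):
        stable   = lambda s: s + dt * np.array([s[1], -10.0 * s[0] - 2.0 * s[1]])
        unstable = lambda s: s + dt * np.array([s[1], +10.0 * s[0] - 2.0 * s[1]])
        print("stable dynamics:  ", largest_lyapunov(stable,   [0.1, 0.0]))  # < 0
        print("unstable dynamics:", largest_lyapunov(unstable, [0.1, 0.0]))  # > 0
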
  • ABSTRACT: A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented. The optimal tuning curve is a nonlinear function of the cumulative distribution function of the stimulus and depends on the mean-variance relationship of the neural system. The derivation is based on a known relationship between Shannon's mutual information and Fisher information, and the optimality of the Jeffreys prior. It relies on the existence of closed-form solutions to the converse problem of optimizing the stimulus distribution for a given tuning curve. It is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed. As an example, the case of sub-Poisson binomial firing statistics is analyzed in detail.
    Physical Review Letters 09/2008; 101(5):058103. DOI: 10.1103/PhysRevLett.101.058103
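
    The flavor of the result is easiest to see in the Poisson special case (an illustration only; the paper itself analyzes sub-Poisson binomial statistics). For Poisson spiking with tuning curve f(s), the Fisher information is J(s) = f'(s)^2 / f(s), and the Jeffreys-prior condition that sqrt(J(s)) be proportional to the stimulus density p(s) integrates in closed form, in LaTeX:

        \[
          \frac{f'(s)}{\sqrt{f(s)}} \propto p(s)
          \;\Longrightarrow\;
          2\sqrt{f(s)} \propto \int_{-\infty}^{s} p(u)\,du
          \;\Longrightarrow\;
          f(s) = f_{\max}\, F(s)^{2},
        \]

    where F is the cumulative distribution function of the stimulus: the maximally informative sigmoidal tuning curve is the squared stimulus CDF, one explicit instance of the "nonlinear function of the cumulative distribution function" stated in the abstract.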
