Article

Output of a neuronal population code


Abstract

Within the population coding framework, we consider how the response distributions shape the output distribution. A general theory for the output of a neuronal population code is presented for the case in which the spike train is a renewal process. Under a given condition on the response distribution, the most probable value of the output distribution is the center of the input-preferred values; otherwise, the center of the input-preferred values is the least probable value, or the output distribution has no most probable state at all. Depending on the exact form of the response distributions, the variance of the output distribution can either broaden or narrow the tuning curves.
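As a rough illustration of the setting (a sketch only, since the paper's own derivation is not available here), the Python fragment below builds a population of Gaussian-tuned neurons whose spike trains are gamma renewal processes, decodes each trial with a simple centre-of-mass readout, and histograms the decoded values to inspect the resulting output distribution. All names and parameter values (rmax, width, the gamma shape k) are illustrative assumptions, not the paper's.

import numpy as np

rng = np.random.default_rng(0)
centers = np.linspace(-2.0, 2.0, 41)        # input-preferred values
rmax, width, k, T = 40.0, 0.5, 2.0, 1.0     # illustrative parameters

def gamma_renewal_count(rate, T):
    # spike count in [0, T] of a gamma renewal process whose
    # ISIs have shape k and mean 1/rate
    n = int(3*rate*T + 20)                  # enough ISIs with high probability
    arrivals = np.cumsum(rng.gamma(k, 1.0/(k*rate), n))
    return int(np.searchsorted(arrivals, T))

def decode_once(x):
    rates = rmax*np.exp(-0.5*((x - centers)/width)**2)
    counts = np.array([gamma_renewal_count(r, T) for r in rates])
    return (counts*centers).sum()/max(counts.sum(), 1)   # centre of mass

x_true = 0.0                                # centre of the preferred values
est = np.array([decode_once(x_true) for _ in range(2000)])
print(est.mean(), est.std())                # histogram of est = output distribution

Under these response statistics the mode of the output distribution sits at the centre of the input-preferred values; varying the renewal shape k and the tuning width shows how the output distribution can end up broader or narrower than the tuning curves.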


... In other words, λ_E = 65 Hz is the most important point for the neuron to read out. It is also very interesting to note that neuronal response curves [18] exhibit a similar (bell-shaped) input-output relationship, as in figure 9. Therefore the results developed here can also be applied to population coding theory. ...
... The IF model is the simplest model in theoretical neuroscience; it reflects certain aspects of a real neuron and provides some clues about how a real neuron operates [2,11]. It would be interesting to apply our conclusions to more biologically realistic models, to the framework of population coding theory [18,20] and to models with correlated inputs [16,32,38]. The most commonly used information measure in neuroscience is the Shannon information. ...
Article
A neuron extensively receives both inhibitory and excitatory inputs. What is the ratio r between these two types of input at which the neuron can most accurately read out the input information (rate)? We explore this issue in the present paper, assuming the neuron is an ideal observer that decodes the input information and attains the Cramér-Rao inequality bound. It is found that, in general, adding certain amounts of inhibitory input to a neuron improves its capability to decode the input information accurately. By calculating the Fisher information of an integrate-and-fire neuron, we determine the optimal ratio r for decoding the input information from an observation of the efferent interspike intervals. Surprisingly, the Fisher information can be zero for certain values of the ratio, seemingly implying that it is impossible to read out the encoded information at these values. By analyzing the maximum likelihood estimate of the input information, it is then concluded that the input information is in fact most easily estimated at the points where the Fisher information vanishes.
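The flavour of this calculation can be reproduced with a toy stand-in. For a perfect (non-leaky) IF neuron driven by the diffusion approximation of Poisson input with rate lam and inhibition ratio r, the ISI is inverse Gaussian, so the Fisher information about the rate can be estimated by Monte Carlo from the squared score. The sketch below assumes this simplified model (not the paper's leaky neuron), with illustrative values for the EPSP size a and the threshold vth, and requires r < 1.

import numpy as np

def ig_logpdf(t, mu, lam):
    # log density of the inverse Gaussian with mean mu and shape lam
    return 0.5*np.log(lam/(2.0*np.pi*t**3)) - lam*(t - mu)**2/(2.0*mu**2*t)

def fisher_info(rate, r, a=0.01, vth=1.0, n=200_000, seed=0):
    # J(rate) = E[(d/d rate of log p(T; rate))^2], estimated from samples;
    # drift a*rate*(1-r) and variance a^2*rate*(1+r) give an inverse
    # Gaussian first-passage (ISI) distribution for the perfect IF model
    rng = np.random.default_rng(seed)
    def pars(rt):
        drift, var = a*rt*(1.0 - r), a*a*rt*(1.0 + r)
        return vth/drift, vth*vth/var       # IG mean and shape
    t = rng.wald(*pars(rate), size=n)       # sample efferent ISIs
    d = 1e-4*rate                           # finite-difference step
    score = (ig_logpdf(t, *pars(rate + d))
             - ig_logpdf(t, *pars(rate - d)))/(2.0*d)
    return float(np.mean(score**2))

for r in (0.0, 0.3, 0.6, 0.9):
    print(r, fisher_info(100.0, r))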
... We refer the reader to [15] for a more complete and biologically oriented formulation of synaptic inputs. The model defined by Eq. (1) is called the integrate-and-fire model [2,3,4,7,8,9,12,13,15]. In the sequel, we define T(·; r; ·) = inf{t : v_t ≥ V_thre} as the firing time (interspike interval) for r ∈ [0, 1]. ...
Article
We find that adding certain amounts of inhibitory inputs to a neuron improves its capability of accurately decoding the input information. The optimal ratio r of inhibitory to excitatory inputs for decoding the input information from an observation of the efferent interspike intervals is calculated. Surprisingly, the Fisher information could be zero for certain values of the ratio, seemingly implying that it is impossible to read out the encoded information at these values. By analyzing the maximum likelihood estimate of the input information, it is then concluded that the input information is, in fact, most easily estimated at the points where the Fisher information vanishes. © 2002 Published by Elsevier Science B.V.
... Whether we can observe it or not in a physiologically plausible parameter region depends on neuronal parameters or, for real neurons, on the environment in which they operate. Since a neuron usually receives massive excitatory and inhibitory inputs, we hope our finding may shed new light on the coding problem [15,18] and suggest another functional role of inhibitory inputs [32], or of noise terms in signal inputs. As a by-product, our results also alter another conventional view in theoretical neuroscience: that increasing inhibitory inputs results in an increase of the randomness of output spike trains. ...
Article
Increasing inhibitory input to single neuronal models, such as the FitzHugh-Nagumo model and the Hodgkin-Huxley model, can sometimes increase their firing rates, a phenomenon we term inhibition-boosted firing (IBF). Here we consider neuronal models with diffusion approximation inputs, i.e. inputs sharing the identical first- and second-order statistics of the corresponding Poisson process inputs. Using the integrate-and-fire model and the IF-FHN model, we explore theoretically how and when IBF can happen. For both models, it is shown that there is a critical input frequency at which the efferent firing rate is identical whether the neuron receives purely excitatory inputs or exactly balanced inhibitory and excitatory inputs. When the input frequency is lower than the critical frequency, IBF occurs.
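A minimal simulation of the IBF claim for the integrate-and-fire case, under the diffusion approximation (drift a*lam*(1-r), variance a^2*lam*(1+r); time in ms, lam in kHz). Every parameter value here is an illustrative guess, and where the crossing falls depends on those choices:

import numpy as np

def lif_rate(lam, r, a=0.5, tau=20.0, vth=1.0, dt=0.05,
             T=2000.0, trials=500, seed=1):
    # mean firing rate (Hz) of leaky IF neurons driven by the diffusion
    # approximation of Poisson input with rate lam and inhibition ratio r
    rng = np.random.default_rng(seed)
    mu, sig = a*lam*(1.0 - r), a*np.sqrt(lam*(1.0 + r))
    v = np.zeros(trials)
    spikes = 0
    for _ in range(int(T/dt)):
        v += (-v/tau + mu)*dt + sig*np.sqrt(dt)*rng.standard_normal(trials)
        fired = v >= vth
        spikes += int(fired.sum())
        v[fired] = 0.0                      # reset after each spike
    return 1000.0*spikes/(T*trials)

# At low input rates the exactly balanced case (r = 1) can fire faster
# than the purely excitatory case (r = 0); the ordering reverses above
# a critical input frequency.
for lam in (0.02, 0.03, 0.05, 0.10):
    print(lam, lif_rate(lam, 0.0), lif_rate(lam, 1.0))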
Article
We assess, both numerically and theoretically, how positively correlated Poisson inputs affect the output of the integrate-and-fire and Hodgkin-Huxley models. For the integrate-and-fire model the variability of efferent spike trains is an increasing function of input correlation, and of the ratio between inhibitory and excitatory inputs. Interestingly for the Hodgkin-Huxley model the variability of efferent spike trains is a decreasing function of input correlation, and for fixed input correlation it is almost independent of the ratio between inhibitory and excitatory inputs. In terms of the signal to noise ratio of efferent spike trains the integrate-and-fire model works better in an environment of asynchronous inputs, but the Hodgkin-Huxley model has an advantage for more synchronous (correlated) inputs. In conclusion the integrate-and-fire and Hodgkin-Huxley models respond to correlated inputs in totally opposite ways.
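The integrate-and-fire half of this result is easy to probe numerically: build positively correlated Poisson trains by thinning a common mother train (each child keeps a mother spike with probability c, giving pairwise correlation of roughly c), drive a perfect IF neuron, and measure the CV of its efferent ISIs. The sketch below uses excitatory inputs only and illustrative parameters; rates are in kHz, times in ms, and it needs 0 < c <= 1.

import numpy as np

def if_output_cv(c, ne=100, rate=0.02, a=0.02, vth=1.0,
                 dt=0.1, T=20000.0, seed=0):
    # CV of efferent ISIs of a perfect IF neuron driven by ne correlated
    # Poisson trains built by thinning one mother train of rate rate/c
    rng = np.random.default_rng(seed)
    steps = int(T/dt)
    mother = rng.random(steps) < (rate/c)*dt
    epsps = (mother & (rng.random((ne, steps)) < c)).sum(axis=0)
    isi, v, last = [], 0.0, 0.0
    for i in range(steps):
        v += a*epsps[i]
        if v >= vth:
            isi.append(i*dt - last)
            last, v = i*dt, 0.0
    isi = np.asarray(isi)
    return float(isi.std()/isi.mean())

for c in (0.02, 0.1, 0.3):
    print(c, if_output_cv(c))   # CV grows with input correlation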
Article
Neurons are typically thought of as receiving information primarily through excitatory inputs, with inhibitory inputs playing a gating or regulating role. In this paper we demonstrate that increasing the strength of inhibitory inputs to the Hodgkin-Huxley and FitzHugh-Nagumo models can induce them to fire faster. This result is counter-intuitive and important in neural network modelling, where inhibitory inputs are often neglected. © 2001 Elsevier Science B.V. All rights reserved. Keywords: Inhibitory input; the Hodgkin-Huxley model; the FitzHugh-Nagumo model
Article
Full-text available
The idea that neurons might use stochastic resonance (SR) to take advantage of random signals has been extensively discussed in the literature. However, a few key issues have not been clarified, and it is therefore difficult to assess whether SR in neuronal models occurs inside physiologically plausible parameter regions. We propose and show that neurons can adjust the correlations between synaptic inputs, which are dynamical variables and can be measured in experiments, to exhibit SR. The benefit of such a mechanism over conventional SR is also discussed.
Article
For the integrate-and-fire model and the Hodgkin-Huxley model, we consider how current inputs, including α-wave and square-wave inputs, affect their outputs. First, the usual approximation is employed to approximate the models with current inputs, which quantitatively reveals the difference between instantaneous and noninstantaneous (current) inputs. When the rising time of α-wave inputs is long, or the ratio between the inhibitory and excitatory inputs is close to one, the usual approximation fails to approximate the α-wave inputs in the integrate-and-fire model. For the Hodgkin-Huxley model, the usual approximation in general gives an unsatisfactory approximation. A novel approach based upon a superposition of 'coloured' and 'white' noise is then proposed to replace the usual approximation. Numerical results show that the novel approach substantially improves the approximation within widely physiologically reasonable regions of the rising time of α-wave inputs.
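For concreteness, an α-wave input is a Poisson spike train convolved with an alpha-function kernel; this smooth, noninstantaneous current is what the usual white-noise approximation replaces with matched mean and variance. A small sketch of the input itself (the kernel normalisation, rise time tau, and the roughly 100 Hz train are illustrative choices):

import numpy as np

def alpha_current(spike_times, t_grid, tau=2.0, g=1.0):
    # each input spike at t_k contributes g*((t-t_k)/tau)*exp(1-(t-t_k)/tau)
    # for t > t_k: a current that rises over ~tau ms and peaks at g
    I = np.zeros_like(t_grid)
    for tk in spike_times:
        s = t_grid - tk
        m = s > 0.0
        I[m] += g*(s[m]/tau)*np.exp(1.0 - s[m]/tau)
    return I

rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0, 0.1)                  # ms
spikes = np.cumsum(rng.exponential(10.0, 50))   # ~100 Hz Poisson train
I = alpha_current(spikes[spikes < t[-1]], t)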
Article
Full-text available
A computational model is described in which the sizes of variables are represented by the explicit times at which action potentials occur, rather than by the more usual 'firing rate' of neurons. The comparison of patterns over sets of analogue variables is done by a network using different delays for different information paths. This mode of computation explains how one scheme of neuroarchitecture can be used for very different sensory modalities and seemingly different computations. The oscillations and anatomy of the mammalian olfactory systems have a simple interpretation in terms of this representation, and relate to processing in the auditory system. Single-electrode recording would not detect such neural computing. Recognition 'units' in this style respond more like radial basis function units than elementary sigmoid units.
Article
Full-text available
The accuracy with which listeners can locate sounds is much greater than the spatial sensitivity of single neurons. The broad spatial tuning of auditory neurons indicates that a code based on the responses of ensembles of neurons, a population code, must be used to determine the position of a sound in space. Here we show that the tuning of neurons to the most potent localization cue, the interaural time difference in low-frequency signals (below approximately 2 kHz), becomes sharper as the information ascends through the auditory system. We also show that this sharper tuning increases the efficiency of the population code, in the sense that fewer neurons are required to achieve a given acuity.
Article
Full-text available
Computational neuroscience has contributed significantly to our understanding of higher brain function by combining experimental neurobiology, psychophysics, modeling, and mathematical analysis. This article reviews recent advances in a key area: neural coding and information processing. It is shown that synapses are capable of supporting computations based on highly structured temporal codes. Such codes could provide a substrate for unambiguous representations of complex stimuli and be used to solve difficult cognitive tasks, such as the binding problem. Unsupervised learning rules could generate the circuitry required for precise temporal codes. Together, these results indicate that neural systems perform a rich repertoire of computations based on action potential timing.
Article
Full-text available
We present a general encoding-decoding framework for interpreting the activity of a population of units. A standard population code interpretation method, the Poisson model, starts from a description of how a single value of an underlying quantity can generate the activities of each unit in the population. In casting it in the encoding-decoding framework, we find that this model is too restrictive to describe fully the activities of units in population codes in higher processing areas, such as the medial temporal area. Under a more powerful model, the population activity can convey information not only about a single value of some quantity but also about its whole distribution, including its variance, and perhaps even the certainty the system has in the actual presence in the world of the entity generating this quantity. We propose a novel method for forming such probabilistic interpretations of population codes and compare it to the existing method.
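The 'standard Poisson model' that the authors generalise is short to state in code: tuning curves give expected counts, encoding draws independent Poisson counts from a single underlying value, and decoding maximises the log likelihood over that value. A baseline sketch with Gaussian tuning and a grid search (all parameter values are illustrative):

import numpy as np

rng = np.random.default_rng(0)
centers = np.linspace(-np.pi, np.pi, 64)        # preferred values

def tuning(x, width=0.5, rmax=20.0):
    return rmax*np.exp(-0.5*((x - centers)/width)**2)

# encoding: one value x generates independent Poisson spike counts
x_true, T = 0.3, 1.0
counts = rng.poisson(tuning(x_true)*T)

# decoding: maximum likelihood over a grid of candidate values
grid = np.linspace(-np.pi, np.pi, 1001)
loglik = [np.sum(counts*np.log(tuning(x)*T) - tuning(x)*T) for x in grid]
x_hat = grid[int(np.argmax(loglik))]
print(x_true, x_hat)

The paper's point is that this encoder is too restrictive: it ties the population activity to one value, whereas activity in higher areas can carry a whole distribution over the quantity.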
Article
Full-text available
Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, by contrast, they typically encode the variable with a further population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform estimation in a near-optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.
Article
Full-text available
Sensory and motor variables are typically represented by a population of broadly tuned neurons. A coarser representation with broader tuning can often improve coding accuracy, but sometimes the accuracy may also improve with sharper tuning. The theoretical analysis here shows that the relationship between tuning width and accuracy depends crucially on the dimension of the encoded variable. A general rule is derived for how the Fisher information scales with the tuning width, regardless of the exact shape of the tuning function, the probability distribution of spikes, and allowing some correlated noise between neurons. These results demonstrate a universal dimensionality effect in neural population coding.
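The scaling rule is easy to verify in the independent-Poisson special case, where the population Fisher information is the sum over neurons of f'(x)^2/f(x). With radially symmetric Gaussian tuning of width w on a dense grid of preferred values, J scales as w^(D-2), so doubling the width changes J by about 2^(D-2). A sketch (the grid extent and density are arbitrary choices):

import numpy as np

def fisher_info(width, D, n_per_dim=21, rmax=10.0):
    # Fisher information about x_1 at x = 0 for independent Poisson
    # neurons with Gaussian tuning on a regular grid in D dimensions
    axis = np.linspace(-5.0, 5.0, n_per_dim)
    centers = np.stack(np.meshgrid(*([axis]*D)), -1).reshape(-1, D)
    f = rmax*np.exp(-0.5*(centers**2).sum(axis=1)/width**2)
    df = f*centers[:, 0]/width**2               # derivative wrt x_1
    return float((df**2/f).sum())

for D in (1, 2, 3):
    print(D, fisher_info(1.0, D)/fisher_info(0.5, D))
    # ratio ~ 2**(D-2): broadening hurts in 1-D, is neutral in 2-D,
    # and helps in 3-D, the dimensionality effect described above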
Article
Full-text available
We study the impact of correlated neuronal firing rate variability on the accuracy with which an encoded quantity can be extracted from a population of neurons. Contrary to widespread belief, correlations in the variabilities of neuronal firing rates do not, in general, limit the increase in coding accuracy provided by using large populations of encoding neurons. Furthermore, in some cases, but not all, correlations improve the accuracy of a population code.
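One standard way to make this point quantitative is the linear Fisher information J = f'(x)^T C^{-1} f'(x) for additive Gaussian rate noise with covariance C. With uniform pairwise correlation c and a derivative profile that sums to roughly zero across the population, correlations raise J above the independent case, and J keeps growing with population size. A sketch with Gaussian tuning and unit noise variance (all values illustrative):

import numpy as np

def linear_fisher(N, c, width=1.0):
    # linear Fisher information at x = 0 for N Gaussian tuning curves
    # and rate noise with unit variance, uniform pairwise correlation c
    centers = np.linspace(-3.0, 3.0, N)
    f = np.exp(-0.5*(centers/width)**2)
    df = f*centers/width**2                     # f'(0) for each neuron
    C = c*np.ones((N, N)) + (1.0 - c)*np.eye(N)
    return float(df @ np.linalg.solve(C, df))

for N in (10, 50, 250):
    print(N, linear_fisher(N, 0.0), linear_fisher(N, 0.3))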
Article
Input noise, defined as the root mean square of the fluctuations in the input, typically limits the performance of any system in engineering or biology. We show that three different performance measures scale identically as a function of the noise in a simple model of neuronal spiking that has both a voltage and current threshold. These performance measures are: the probability of correctly detecting a constant input in a limited time, the signal-to-noise ratio in response to sinusoidal input, and the mutual information between an arbitrarily varying input and the output spike train of the model neuron. Of these, detecting a constant signal is the simplest and most fundamental quantity. For subthreshold signals, the model exhibits stochastic resonance, a non-zero noise amplitude that optimally enhances signal detection. In this case, noise paradoxically does not limit, but instead improves performance. This resonance arises through the conjunction of two competing mechanisms: the noise-induced linearization (‘dithering’) of the model's firing rate and the increase in the variability of the number of spikes in the output. Even though the noise amplitude dwarfs the signal, detection of a weak constant signal using stochastic resonance is still possible when the signal elicits on average only one additional spike. Stochastic resonance could thus play a role in neurobiological sensory systems, where speed is of the utmost importance and averaging over many individual spikes is not possible.
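The detection measure is the simplest of the three to simulate: drive a leaky IF unit with a subthreshold constant signal plus noise and compare the probability of at least one spike in a fixed window with and without the signal. In this sketch (every parameter value is illustrative) the hit-minus-false-alarm difference peaks at a non-zero noise amplitude, the stochastic-resonance signature:

import numpy as np

def spike_prob(mu, sig, tau=10.0, vth=1.0, dt=0.1, T=100.0,
               trials=3000, seed=0):
    # P(at least one spike in [0, T]) for a leaky IF unit with constant
    # drive mu and white noise of amplitude sig (Euler-Maruyama)
    rng = np.random.default_rng(seed)
    v = np.zeros(trials)
    fired = np.zeros(trials, dtype=bool)
    for _ in range(int(T/dt)):
        v += (-v/tau + mu)*dt + sig*np.sqrt(dt)*rng.standard_normal(trials)
        crossed = v >= vth
        fired |= crossed
        v[crossed] = 0.0
    return fired.mean()

signal = 0.08                       # subthreshold: signal*tau < vth
for sig in (0.05, 0.1, 0.2, 0.4, 0.8):
    print(sig, spike_prob(signal, sig) - spike_prob(0.0, sig))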
Article
We consider behaviors of output jitter in the simplest spiking model, the integrate-and-fire model. The full spectrum of behaviors is found: The output jitter is sensitive to the input distribution and can be a constant, diverge to infinity, or converge to zero. Exact formulas for the convergence or the divergence of output jitter are given. Our results suggest that the exponential distribution is the critical case: A faster rate of decrease in the distribution tail as compared to the exponential distribution tail ensures the convergence of output jitter, whereas slower decay in the distribution tail causes the divergence of output jitter.
Article
Quantitative methods for the study of the statistical properties of spontaneously occurring spike trains from single neurons have recently been presented. Such measurements suggest a number of descriptive mathematical models. One of these, based on a random walk towards an absorbing barrier, can describe a wide range of neuronal activity in terms of two parameters. These parameters are readily associated with known physiological mechanisms.
Article
To understand how a single neurone processes information, it is critical to examine the relationship between input and output. Marsalek, Koch and Maunsell's study of output jitter (the standard deviation of output interspike intervals) found that for the integrate-and-fire (I&F) model this response measure converges towards zero as the number of inputs increases indefinitely, when the interarrival times of excitatory inputs (EPSPs) are normally or uniformly distributed. In this work we present a complete theoretical investigation, corroborated by numerical simulation, of output jitter in the I&F model with a variety of input distributions and a range of values of the number of inputs, N. Our main results are: the exponential input distribution is a critical case, and its output jitter is independent of N. For input distributions with tails that decrease faster than the exponential distribution, output jitter converges to zero, as discovered by Marsalek, Koch and Maunsell; whereas an input distribution with a more slowly decreasing tail induces divergence of the output jitter. Exact formulae for the mean firing time are also obtained, which enable us to estimate the coefficient of variation. The I&F model with leakage is also briefly considered.
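The trichotomy can be illustrated with a deliberately crude caricature rather than the paper's actual I&F dynamics: suppose the cell fires once every one of its N input lines has delivered a spike, so the firing time is the maximum of N iid arrival times and extreme value theory takes over. A uniform input then gives jitter tending to zero, an exponential input gives N-independent jitter (the Gumbel limit, std pi/sqrt(6) for unit rate), and a Pareto input gives divergent jitter:

import numpy as np

rng = np.random.default_rng(0)

def jitter(sampler, N, trials=20000):
    # std of the output firing time when the cell fires only after
    # all N input lines have arrived: the max of N iid arrival times
    return float(sampler((trials, N)).max(axis=1).std())

for N in (10, 100, 1000):
    print(N,
          jitter(lambda s: rng.uniform(0.0, 1.0, s), N),    # -> 0
          jitter(lambda s: rng.exponential(1.0, s), N),     # ~1.28, flat in N
          jitter(lambda s: rng.pareto(1.5, s), N))          # grows with N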
Article
I have examined the performance of a population coding model of visual orientation discrimination, similar to the population coding models proposed for the coding of limb movements. The orientation of the stimulus is not represented by a single unit but by an ensemble of broadly tuned units in a distributed way. Each unit is represented by a vector whose magnitude and direction correspond to the response magnitude and preferred orientation of the unit, respectively. The orientation of the population vector, i.e. the vector sum of the ensemble of units, is the signalled orientation on a particular trial. The accuracy of this population vector orientation coding was determined as a function of a number of parameters by computer simulation. I have shown that even with broadly orientation tuned units possessing considerable response variance, the accuracy of the orientation of the population vector can be as good as behaviorally measured just noticeable differences in orientation. The accuracy of the population code is shown to depend upon the number of units, the average response strength, the orientation band-width, response variability and the response covariance. The results of these simulations were also compared to predictions derived from psychophysical studies of orientation discrimination.
Article
Perception of a visual attribute, such as orientation, is strongly dependent on the context within which a feature is presented, such as that seen in the tilt illusion. The possibility that the neurophysiological basis for this phenomenon may be manifest at the level of cells in striate cortex is suggested by anatomical and physiological observations of orientation dependent long range horizontal connections which relate disparate points in the visual field. This study explores the dependency of the functional properties of single cells on visual context. We observed several influences of the visual field area surrounding cells' receptive field on the properties of the receptive field center: inhibition or facilitation dependent on the orientation of the surround, shifts in orientation preference and changes in the bandwidth of orientation tuning. To relate these changes to perceptual changes in orientation we modeled a neuronal ensemble encoding orientation. Our results show that the filter characteristics of striate cortical cells are not necessarily fixed, but can be dynamic, changing according to context.
Article
The deeper layers of the superior colliculus are involved in the initiation and execution of saccadic (high velocity) eye movements. A large population of coarsely tuned collicular neurons is active before each saccade. The mechanisms by which the signals that precisely control the direction and amplitude of a saccade are extracted from the activity of the population are unknown. It has been assumed that the exact trajectory of a saccade is determined by the activity of the entire population and that information is not extracted from only the most active cells in the population at a subsequent stage of neural processing. The trajectory of a saccade could be based on vector summation of the movement tendencies provided by each member of the population of active neurons or be determined by a weighted average of the vector contributions of each neuron in the active population. Here we present the results of experiments in which a small subset of the active population was reversibly deactivated with lidocaine. These results are consistent with the predictions of the latter population-averaging hypothesis and support the general idea that the direction, amplitude and velocity of saccadic eye movements are based on the responses of the entire population of cells active before a saccadic eye movement.
Article
Although individual neurons in the arm area of the primate motor cortex are only broadly tuned to a particular direction in three-dimensional space, the animal can very precisely control the movement of its arm. The direction of movement was found to be uniquely predicted by the action of a population of motor cortical neurons. When individual cells were represented as vectors that make weighted contributions along the axis of their preferred direction (according to changes in their activity during the movement under consideration) the resulting vector sum of all cell vectors (population vector) was in a direction congruent with the direction of movement. This population vector can be monitored during various tasks, and similar measures in other neuronal populations could be of heuristic value where there is a neural representation of variables with vectorial attributes.
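The construction is compact: represent each cell by a unit vector along its preferred direction, weight it by the change in the cell's activity, and sum. A sketch with cosine tuning and Poisson spike counts (the baseline b, depth k and the rest are illustrative values):

import numpy as np

rng = np.random.default_rng(0)
N = 200
pref = rng.standard_normal((N, 3))
pref /= np.linalg.norm(pref, axis=1, keepdims=True)   # preferred directions

def population_vector(direction, b=10.0, k=8.0, T=1.0):
    # cosine-tuned mean rates, Poisson counts, then a vector sum
    # weighted by each cell's activity change relative to baseline
    rates = np.clip(b + k*(pref @ direction), 0.0, None)
    counts = rng.poisson(rates*T)
    pv = ((counts - b*T)[:, None]*pref).sum(axis=0)
    return pv/np.linalg.norm(pv)

d = np.array([1.0, 0.0, 0.0])
print(population_vector(d))     # close to d, and sharper as N grows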
Article
In many neural systems, sensory information is distributed throughout a population of neurons. We study simple neural network models for extracting this information. The inputs to the networks are the stochastic responses of a population of sensory neurons tuned to directional stimuli. The performance of each network model in psychophysical tasks is compared with that of the optimal maximum likelihood procedure. As a model of direction estimation in two dimensions, we consider a linear network that computes a population vector. Its performance depends on the width of the population tuning curves and is maximal at a finite tuning width, which increases with the level of background activity. Although for narrowly tuned neurons the performance of the population vector is significantly inferior to that of maximum likelihood estimation, the difference between the two is small when the tuning is broad. For direction discrimination, we consider two models: a perceptron with fully adaptive weights, and a network made by adding an adaptive second layer to the population vector network. We calculate the error rates of these networks after exhaustive training to a particular direction. By testing on the full range of possible directions, the extent of transfer of training to novel stimuli can be calculated. It is found that for threshold linear networks the transfer of perceptual learning is nonmonotonic. Although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle. This nonmonotonicity provides an important psychophysical test of these models.
Article
Several lines of evidence indicate that brief (< 25 ms) bursts of high-frequency firing have special importance in brain function. Recent work shows that many central synapses are surprisingly unreliable at signaling the arrival of single presynaptic action potentials to the postsynaptic neuron. However, bursts are reliably signaled because transmitter release is facilitated. Thus, these synapses can be viewed as filters that transmit bursts, but filter out single spikes. Bursts appear to have a special role in synaptic plasticity and information processing. In the hippocampus, a single burst can produce long-term synaptic modifications. In brain structures whose computational role is known, action potentials that arrive in bursts provide more-precise information than action potentials that arrive singly. These results, and the requirement for multiple inputs to fire a cell suggest that the best stimulus for exciting a cell (that is, a neural code) is coincident bursts.
Article
What is the relationship between the temporal jitter in the arrival times of individual synaptic inputs to a neuron and the resultant jitter in its output spike? We report that the rise times of firing rates of cells in striate and extrastriate visual cortex in the macaque monkey remain equally sharp at different stages of processing. Furthermore, as observed by others, recordings from single units in the primate frontal lobe reveal a strong peak in their cross-correlation in the 10-150 msec range with very small temporal jitter (on the order of 1 msec). We explain these results using numerical models to study the relationship between the temporal jitter in excitatory and inhibitory synaptic input and the variability in the spike output timing in integrate-and-fire units and in a biophysically and anatomically detailed model of a cortical pyramidal cell. We conclude that under physiological circumstances the standard deviation in the output jitter is linearly related to the standard deviation in the input jitter, with a constant of less than one. Thus, the timing jitter in successive layers of such neurons will converge to a small value dictated by the jitter in axonal propagation times.
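The layered consequence can be sketched with a stand-in firing rule rather than the paper's biophysics: let each downstream cell fire at the mean of its fan-in upstream spike times plus independent axonal jitter. One layer then maps an input spread sigma to sqrt(sigma^2/fan_in + jitter^2), a contraction for large sigma, so repeated layers converge geometrically to a small floor set by the axonal term (all numbers illustrative):

import numpy as np

rng = np.random.default_rng(0)

def next_layer(times, fan_in=20, axonal_jitter=0.2, n_out=1000):
    # each downstream cell fires at the mean of fan_in randomly chosen
    # upstream spike times plus independent axonal jitter (times in ms)
    idx = rng.integers(0, len(times), size=(n_out, fan_in))
    return times[idx].mean(axis=1) + axonal_jitter*rng.standard_normal(n_out)

t = 5.0*rng.standard_normal(1000)   # broad timing jitter in the first layer
for layer in range(6):
    print(layer, float(t.std()))    # converges towards ~0.21 ms here
    t = next_layer(t)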
Article
The correlation of neuronal activity with sensory input and behavioural output has revealed that information is often encoded in the activity of many neurons across a population, that is, a neural population code is used. The possible algorithms that downstream networks use to read out this population code have been studied by manipulating the activity of a few neurons in a population. We have used this approach to study population coding in a small network underlying the leech local bend, a body bend directed away from a touch stimulus. Because of the small size of this network we are able to monitor and manipulate the complete set of sensory inputs to the network. We show here that the population vector formed by the spike counts of the active mechanosensory neurons is well correlated with bend direction. A model based on the known connectivity of the identified neurons in the local bend network can account for our experimental results, and is suitable for reading out the neural population vector. Thus, for the first time to our knowledge, it is possible to link a proposed algorithm for neural population coding with synaptic and network mechanisms in an experimental system.
Article
We consider how the output of the perfect integrate-and-fire (I&F) model of a single neuron is affected by the properties of the input: first of all, by the distribution of afferent excitatory and inhibitory postsynaptic potential (EPSP, IPSP) inter-arrival times, discriminating particularly between short- and long-tailed forms, and by the degree of balance between excitation and inhibition (as measured by the ratio r between the numbers of inhibitory and excitatory inputs). We find that the coefficient of variation (CV; standard deviation divided by mean) of the efferent interspike interval (ISI) is an increasing function of the length of the tail of the distribution of EPSP inter-arrival times and of the ratio r. There is a range of values of r in which the CV of output ISIs is between 0.5 and 1. Too tight a balance between EPSPs and IPSPs causes the model to produce a CV outside the interval considered to correspond to the physiological range. Going to the extreme, an exact balance between EPSPs and IPSPs as considered in [24] ensures a long-tailed ISI output distribution for which moments such as the mean and variance cannot be defined. In this case it is meaningless to consider quantities such as the output jitter or the CV of the efferent ISIs. The longer the tail of the input inter-arrival time distribution, the less is the requirement for balance between EPSPs and IPSPs in order to evoke output spike trains with a CV between 0.5 and 1. For a given short-tailed input distribution, the range of values of r in which the CV of efferent ISIs is between 0.5 and 1 is almost completely inside the range in which the output jitter (standard deviation of efferent ISIs) is greater than the input jitter. Only when the CV is smaller than 0.5, or the input distribution is long-tailed, is the output jitter less than the input jitter [21]. The I&F model tends to enlarge low input jitter and reduce high input jitter. We also provide a novel theoretical framework, based upon extreme value theory in statistics, for estimating output jitter, CV and mean firing time.
References

A. P. Georgopoulos, A. B. Schwartz, and R. E. Kettner, Science 233, 1416 (1986).
E. Zohary, Biol. Cybern. 66, 262 (1992).
K. C. Zhang and T. Sejnowski, Neural Comput. 11, 75 (1999).
J. Feng and D. Brown, Biol. Cybern. 78, 369 (1998).
D. C. Fitzpatrick, R. Batra, T. R. Stanford, and S. Kuwada, Nature (London) 388, 871 (1997).
J. Feng, Phys. Rev. Lett. 79, 4505 (1997).
L. Abbott and P. Dayan, Neural Comput. 11, 91 (1999).
J. J. Hopfield, Nature (London) 376, 33 (1995).
G. L. Gerstein and B. Mandelbrot, Biophys. J. 4, 41 (1964).
C. Lee, W. H. Rohrer, and D. L. Sparks, Nature (London) 332, 357 (1988).
C. D. Gilbert and T. N. Wiesel, Vision Res. 30, 1689 (1990).