Article

Segregation of object and background motion in the retina


Abstract

An important task in vision is to detect objects moving within a stationary scene. During normal viewing this is complicated by the presence of eye movements that continually scan the image across the retina, even during fixation. To detect moving objects, the brain must distinguish local motion within the scene from the global retinal image drift due to fixational eye movements. We have found that this process begins in the retina: a subset of retinal ganglion cells responds to motion in the receptive field centre, but only if the wider surround moves with a different trajectory. This selectivity for differential motion is independent of direction, and can be explained by a model of retinal circuitry that invokes pooling over nonlinear interneurons. The suppression by global image motion is probably mediated by polyaxonal, wide-field amacrine cells with transient responses. We show how a population of ganglion cells selective for differential motion can rapidly flag moving objects, and even segregate multiple moving objects.
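The differential-motion computation summarized in the abstract can be caricatured numerically. The sketch below collapses the paper's subunit-pooling circuit into a single differential term purely for illustration: a toy one-dimensional world, with threshold and noise parameters chosen arbitrarily, not taken from the study.

```python
import numpy as np

def random_walk(n, sd, seed):
    """1-D fixational-drift-like trajectory (cumulative Gaussian steps)."""
    rng = np.random.default_rng(seed)
    return np.cumsum(rng.normal(0.0, sd, n))

def oms_firing_rate(center, surround, threshold=0.5):
    """Toy object-motion-sensitive (OMS) unit.

    The unit is driven by rectified motion in its receptive-field center
    but suppressed by matching global motion, so it fires only when the
    center's frame-by-frame displacement differs from the surround's.
    """
    differential = np.abs(np.diff(center) - np.diff(surround))
    return float(np.mean(differential > threshold))

eye = random_walk(2000, sd=0.5, seed=1)        # global drift from eye movements
obj = eye + random_walk(2000, sd=0.5, seed=2)  # object: drift plus its own motion

print(oms_firing_rate(eye, eye))  # global motion only -> unit stays silent (0.0)
print(oms_firing_rate(obj, eye))  # differential motion -> unit fires
```

Shared drift cancels in the differential term, so the unit's selectivity is independent of motion direction, matching the qualitative behavior described above.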


... Modeling spikes is important since RGCs may encode visual stimuli using the relative timing of spikes, which therefore plays an important role in sensory transmission [21]. Furthermore, these models have not been shown to capture more complex retinal processes, such as stimulus-omission [22,23] and motion-anticipation [24] responses or differential motion tuning [25]. Most importantly, and pivotal to the compression versus prediction dichotomy, it remains unclear to what extent a spiking model optimized for prediction compares to the retina. ...
... Similar to these neurons, we found units in the model (7.66% of units) that did not respond to a stationary grating of high spatial frequency (for each unit's preferred orientation), yet fired as soon as the grating moved (Fig. 4b). Some RGCs, known as object-motion-sensitive (OMS) cells, exhibit differential motion selectivity, where a neuron remains silent when an image (or grating) moves across the retina, yet fires if the motion in the RF center differs from that in the wider surround (e.g. by masking out the wider surround) [25,52,62,63]. We also found such OMS tuning in some of the model units (Fig. 4c). ...
... Our model exhibited several retinal phenomena unaccounted for by other normative studies of the retina. This includes finding certain units tuned to the direction and orientation of moving gratings, as has similarly been reported in the retina [74]; units that exclusively fire to high-spatial-frequency gratings only if they move, like Y-type cells in the retina [20]; units tuned for the differential motion of objects versus their background [25]; units tuned for the anticipation of moving objects [24]; and units tuned for stimulus omission within a temporal sequence of flashing lights [22,23]. Certain complex retinal phenomena (like omission and anticipation responses) have been shown to emerge in non-spiking encoding models of the retina [75,76]. ...
Preprint
Full-text available
The retina's role in visual processing has been viewed as two extremes: an efficient compressor of incoming visual stimuli akin to a camera, or as a predictor of future stimuli. Addressing this dichotomy, we developed a biologically-detailed spiking retinal model trained on natural movies under metabolic-like constraints to either encode the present or to predict future scenes. Our findings reveal that when optimized for efficient prediction approximately 100 ms into the future, the model not only captures retina-like receptive fields and their mosaic-like organizations, but also exhibits complex retinal processes such as latency coding, motion anticipation, differential tuning, and stimulus-omission responses. Notably, the predictive model also more accurately predicts the way retinal ganglion cells respond across different animal species to natural images and movies. Our findings demonstrate that the retina is not merely a compressor of visual input, but rather is fundamentally organized to provide the brain with foresight into the visual world.
... Gap junctions between RGCs and ACs may help to filter and reduce temporal noise, thereby enhancing or decreasing the likelihood of firing in coupled RGCs and ACs for a brief period. This process can modulate the output signals of the RGCs to the brain (52)(53)(54). ACs provide feedback inhibition, surround inhibition, adaptation, signal averaging, and noise reduction (54,55). They also help shape visual processing by computing local contrast in the circuit formed around each RGC type. ...
Article
Full-text available
Gap junctions are channels that allow for direct transmission of electrical signals between cells. However, the ability of one cell to be impacted or controlled by other cells through gap junctions remains unclear. In this study, heterocellular coupling between ON α retinal ganglion cells (RGCs) and displaced amacrine cells (ACs) in the mouse retina was utilized as a model. The impact of the extent of coupling of interconnected ACs on the synchronized firing between coupled ON α RGC-AC pairs was investigated. It was observed that the synchronized firing between the ON α RGC-AC pairs was increased by the dopamine 1 receptor antagonist SCH23390, while it was eradicated by the agonist SKF38393. Subsequently, coupled ON α RGC-AC pairs were infected with the channelrhodopsin-2 (ChR2) mutation L132C. The spikes of ON α RGCs (without ChR2) could be triggered by ACs (with ChR2) through the gap junction, and vice versa. Furthermore, it was observed that ON α RGCs stimulated with 3-10 Hz currents by whole-cell patch could elicit synchronous spikes in the coupled ACs, and vice versa. The study implies that the synchronized firing between ON α RGC-AC pairs could potentially be affected by the coupling of interconnected ACs, and that one cell type can selectively control the firing of another, forcefully transmitting information. The key role of gap junctions in synchronizing firing and driving cells between α RGCs and coupled ACs in the mouse retina is highlighted.
... When an object moves in the world, the image that is cast onto the retina has retinal motion both due to the object's motion as well as the fixational eye motion. To properly perceive an object's motion in the world, the visual system is tasked with disentangling the object's motion from the fixational eye motion (10,11). Normally, the visual system performs this task exceptionally well; humans are able to reliably perceive world-fixed objects as stable and can identify moving objects within it with hyperacuity (1, 3). ...
... The neural underpinning for detection of these stimuli in the presence of incessant eye movements may lie in the Object Motion Sensing (OMS) ganglion cells. Yet to be found in primates, this class of retinal ganglion cells is very effective at identifying an object that is moving differently than the surround (11). But despite increasing knowledge of the neural systems that underlie our ability to perceive a stable and moving world (22), what remains unclear is how two objects that move in a direction consistent with retinal slip but with different velocities can be rendered in the percept to be fixed relative to each other. ...
Preprint
Full-text available
Motion perception is considered a hyperacuity. The presence of a visual frame of reference to compute relative motion is necessary to achieve this sensitivity [Legge, Gordon E., and F. W. Campbell. 'Displacement detection in human vision.' Vision Research 21.2 (1981): 205-213.]. However, there is a special condition where humans are unable to accurately detect relative motion: images moving in a direction consistent with retinal slip where the motion is unnaturally amplified can, under some conditions, appear stable [Arathorn, David W., et al. 'How the unstable eye sees a stable and moving world.' Journal of Vision 13.10.22 (2013)]. In this study, we asked: Is world-fixed retinal image background content necessary for the visual system to compute the direction of eye motion to render in the percept images moving with amplified slip as stable? Or, are non-visual cues sufficient? Subjects adjusted the parameters of a stimulus moving in a random trajectory to match the perceived motion of images moving contingent to the retina. Experiments were done with and without retinal image background content. The perceived motion of stimuli moving with amplified retinal slip was suppressed in the presence of visual content; however, higher magnitudes of motion were perceived under conditions with no visual cues. Our results demonstrate that the presence of retinal image background content is essential for the visual system to compute its direction of motion. The visual content that might be thought to provide a strong frame of reference to detect amplified retinal slips, instead paradoxically drives the misperception of relative motion.
... Even at the earliest stages of visual processing, the retina performs nonlinear computations to encode essential aspects of the visual scene. Retinal networks are flexible enough to encode a wide variety of complex stimulus features, such as object motion [31,32], motion reversals [33][34][35], and omitted stimuli [36]. These early computations support efficient downstream readout by throwing away redundant information and preserving features that facilitate perception. ...
... These models explain a wide array of complex retinal computations (e.g. motion onset [58], omitted stimulus response [36], background vs object motion [32], reversal response [33][34][35]). As a complement to that, BCs have diverging projections onto multiple RGCs on the retina [63]. ...
Preprint
Full-text available
Everything that the brain sees must first be encoded by the retina, which maintains a reliable representation of the visual world in many different, complex natural scenes while also adapting to stimulus changes. Decomposing the population code into independent and cell-cell interactions reveals how broad scene structure is encoded in the adapted retinal output. By recording from the same retina while presenting many different natural movies, we see that the population structure, characterized by strong interactions, is consistent across both natural and synthetic stimuli. We show that these interactions contribute to encoding scene identity. We also demonstrate that this structure likely arises in part from shared bipolar cell input as well as from gap junctions between retinal ganglion cells and amacrine cells.
... 20,46 One successful approach directly tests computational models of hypothetical neurons proposed to perform a function against interneuron recordings. 18 Our approach generalizes this strategy across a wide range of stimuli and interneurons, yielding an automatic approach to hypothesizing specific roles for interneurons in the response to any stimulus. Guided by an attribution analysis that reveals the internal model states (Figure 8), a future goal will be defining a minimal sufficient stimulus set, including natural images, movies, and potentially artificial stimuli that engage all circuit properties, thus efficiently generating a model that captures all nonlinear properties and phenomena of the retinal circuit. ...
... Pixel regions of size 50 × 50 were then selected from each image at a random location, without spatial averaging, for presentation. Images drifted in two dimensions in a random walk, 18 moving with a standard deviation of 0.5 pixels per video frame horizontally and vertically. The image also jumped abruptly to a different location in a single frame every second, representing saccades, although these transitions did not contain a sweeping shift of the image. ...
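The drift-plus-saccade stimulus described in this excerpt can be sketched as a position generator for the image crop. This is an illustrative reconstruction, not the study's code; the saccade interval in frames and the position bound are assumed parameters.

```python
import numpy as np

def drift_saccade_positions(n_frames, drift_sd=0.5, frames_per_saccade=60,
                            max_xy=150, seed=0):
    """Crop positions for a drifting-image stimulus.

    Between saccades the position follows a 2-D random walk with
    `drift_sd` pixels of standard deviation per frame and axis; every
    `frames_per_saccade` frames the position jumps to a new random
    location in a single frame (no sweeping motion). The saccade
    interval and the `max_xy` bound are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, max_xy, 2)
    out = np.empty((n_frames, 2))
    for t in range(n_frames):
        if t > 0 and t % frames_per_saccade == 0:
            pos = rng.uniform(0, max_xy, 2)                      # saccadic jump
        else:
            pos = np.clip(pos + rng.normal(0, drift_sd, 2), 0, max_xy)
        out[t] = pos
    return out

positions = drift_saccade_positions(300)
```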
Article
Full-text available
Understanding the circuit mechanisms of the visual code for natural scenes is a central goal of sensory neuroscience. We show that a three-layer network model predicts retinal natural scene responses with an accuracy nearing experimental limits. The model's internal structure is interpretable, as interneurons recorded separately and not modeled directly are highly correlated with model interneurons. Models fitted only to natural scenes reproduce a diverse set of phenomena related to motion encoding, adaptation, and predictive coding, establishing their ethological relevance to natural visual computation. A new approach decomposes the computations of model ganglion cells into the contributions of model interneurons, allowing automatic generation of new hypotheses for how interneurons with different spatiotemporal responses are combined to generate retinal computations, including predictive phenomena currently lacking an explanation. Our results demonstrate a unified and general approach to study the circuit mechanisms of ethological retinal computations under natural visual scenes.
... When animals capture prey, they must detect the initiation or emergence of the prey and compute the motion trajectory to estimate the prey's location at the next moment. Intriguingly, there are retinal ganglion cell types that detect motion information: looming detectors to signal approaching predators (Münch et al., 2009;Kim et al., 2020), object motion detectors (Olveczky, Baccus and Meister, 2003;Baccus et al., 2008), detectors for motion onset (Chen et al., 2013;Liu et al., 2021) and for changes in motion trajectory (Schwartz et al., 2007). Retinal circuits have mechanisms to compute motion trajectories (Leonardo and Meister, 2013) and anticipate future prey movement (Berry et al., 1999). ...
... Motion responses in several types of retinal ganglion cells are modulated by stimulation outside the receptive field (Mcilwain, 1964;Werblin, 1972). Notably, specific types of retinal ganglion cells, called object motion detectors, are inhibited by global motion that covers the far-surround receptive field, and detect local motion segregated from the background (Olveczky, Baccus and Meister, 2003;Baccus et al., 2008). In the rabbit retina, specific ganglion cell types receive strong suppression during rapid global shifts, mimicking saccadic eye movements, which improves the signal-to-noise ratio by elevating the response threshold (Roska and Werblin, 2003). ...
Article
The retinal neuronal circuit is the first stage of visual processing in the central nervous system. The efforts of scientists over the last few decades indicate that the retina is not merely an array of photosensitive cells, but also a processor that performs various computations. Within a thickness of only ~200 µm, the retina consists of diverse forms of neuronal circuits, each of which encodes different visual features. Since the discovery of direction-selective cells by Horace Barlow and Richard Hill, the mechanisms that generate direction selectivity in the retina have remained a fascinating research topic. This review provides an overview of recent advances in our understanding of direction-selectivity circuits. Beyond the conventional wisdom of direction selectivity, emerging findings indicate that the retina utilizes complicated and sophisticated mechanisms in which excitatory and inhibitory pathways are involved in the efficient encoding of motion information. As will become evident, the discovery of computational motifs in the retina facilitates an understanding of how sensory systems establish feature selectivity.
... Beyond the DS circuit, there are many other RGC types that could rely on BC rDS. In mammals, several RGC and AC types are object motion sensitive, responding specifically to local motion or differential motion 4,6,15,70,76,87,92,[101][102][103] , including several prominent primate RGC types, such as parasol RGCs 63,88 . In salamander, a large class of Off ganglion cells prefers motion originating in the RF center compared to motion passing through, which was termed an "alert response to motion onset" and whose responses are best predicted when accounting for the space-time RFs of BCs 56 . ...
... At the same time, the BC radial direction detector is rather insensitive to the type of scene motion that occurs when the body, head and eyes smoothly move. This dichotomy allows for detection of behaviorally-relevant moving objects 15 . It is striking that this essential visual information for animal survival is detected already in bipolar cells. ...
Article
Full-text available
Motion sensing is a critical aspect of vision. We studied the representation of motion in mouse retinal bipolar cells and found that some bipolar cells are radially direction selective, preferring the origin of small object motion trajectories. Using a glutamate sensor, we directly observed bipolar cells' synaptic output and found that there are radial direction selective and non-selective bipolar cell types, the majority being selective, and that radial direction selectivity relies on properties of the center-surround receptive field. We used these bipolar cell receptive fields along with connectomics to design biophysical models of downstream cells. The models and additional experiments demonstrated that bipolar cells pass radial direction selective excitation to starburst amacrine cells, which contributes to their directional tuning. As bipolar cells provide excitation to most amacrine and ganglion cells, their radial direction selectivity may contribute to motion processing throughout the visual system.
... And the stronger the external impact, the more frequent the signals. The work [Olveczky, B., Baccus S. & Meister M., 2003] shows how a network of ganglion cells can rapidly detect a moving object and even segregate several such objects. The results presented in [Masland, R.H., 2001], [Wassle, H., 2004], [Olveczky, B., Baccus S. & Meister M., 2003] and [Maass, W., 1997] reveal the physiological principles by which the retina separates out moving objects. ...
... But is it possible to use this information to create artificial neural networks that can isolate moving objects just as quickly and accurately? ...
Article
Full-text available
This paper describes a neural network model based on a biological analog of the spiking neural network of the retina, which makes it possible to identify moving objects in a video image, together with a motion detector based on the retina's operation. The proposed detector is an alternative to detectors based on deterministic methods and traditional neural networks, and requires less computational resources at the same video-image processing speed.
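The change-driven behavior such retina-inspired detectors exploit can be caricatured with simple temporal differencing: static background produces no temporal change and is silenced, while a moving object activates the pixels it leaves and enters. This is a minimal illustration of the principle, not the spiking network described above.

```python
import numpy as np

def transient_motion_mask(frames, threshold=0.1):
    """Change-driven motion detection, in the spirit of transient
    retinal responses: thresholded absolute frame-to-frame intensity
    change marks the pixels where something moved."""
    change = np.abs(np.diff(frames.astype(float), axis=0))
    return change > threshold

# A static 8x8 background with one bright pixel stepping rightward.
frames = np.zeros((5, 8, 8))
for t in range(5):
    frames[t, 3, t] = 1.0
mask = transient_motion_mask(frames)
print(mask.sum(axis=(1, 2)))  # 2 active pixels per frame pair: one left, one entered
```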
... For example, male crickets use them to differentiate their own chirps from the chirps of other males 7 , and electric fish effectively suppress the sensory input that should result from their own electrical production 8 . The same mechanisms are also responsible for the suppression of the perception of motion during saccadic eye movements in humans, so that the world appears stationary during those saccades 9 . In contrast, the perceived world appears to move when pressing our eye gently with a finger, due to the lack of efference copy in this situation. ...
Article
Full-text available
Forward models are mechanisms enabling an agent to predict the sensory outcomes of its actions. They can be implemented through efference copies: copies of motor signals inhibiting the expected sensory stimulation, literally canceling the perceptual outcome of the predicted action. In insects, efference copies are known to modulate optic flow detection for flight control in flies. Here we investigate whether forward models account for the detection of optic flow in walking ants, and how the latter is integrated for locomotion control. We mounted Cataglyphis velox ants in a virtual reality setup and manipulated the relationship between the ants’ movements and the optic flow perceived. Our results show that ants compute predictions of the optic flow expected according to their own movements. However, the prediction is not solely based on efference copies, but involves proprioceptive feedback and is fine-tuned by the panorama’s visual structure. Mismatches between prediction and perception are computed for each eye, and error signals are integrated to adjust locomotion through the modulation of internal oscillators. Our work reveals that insects’ forward models are non-trivial and compute predictions based on multimodal information.
... Instead, each pixel independently and asynchronously responds to changes in intensity within the environment. Due to their asynchronous nature, similar to the biological retina, they can accurately and efficiently capture motion information in natural scenes [4,5], particularly movements caused by dynamic objects [6,7]. This asynchronous characteristic makes event cameras well-suited for a wide range of applications, including target tracking, robotics, motion estimation, autonomous vehicles, and virtual reality. ...
Article
Full-text available
Event cameras, as bio-inspired visual sensors, offer significant advantages in their high dynamic range and high temporal resolution for visual tasks. These capabilities enable efficient and reliable motion estimation even in the most complex scenes. However, these advantages come with certain trade-offs. For instance, current event-based vision sensors have low spatial resolution, and the process of event representation can result in varying degrees of data redundancy and incompleteness. Additionally, due to the inherent characteristics of event stream data, they cannot be utilized directly; pre-processing steps such as slicing and frame compression are required. Currently, various pre-processing algorithms exist for slicing and compressing event streams. However, these methods fall short when dealing with multiple subjects moving at different and varying speeds within the event stream, potentially exacerbating the inherent deficiencies of the event information flow. To address this longstanding issue, we propose a novel and efficient Asynchronous Spike Dynamic Metric and Slicing algorithm (ASDMS). ASDMS adaptively segments the event stream into fragments of varying lengths based on the spatiotemporal structure and polarity attributes of the events. Moreover, we introduce a new Adaptive Spatiotemporal Subject Surface Compensation algorithm (ASSSC). ASSSC compensates for missing motion information in the event stream and removes redundant information, thereby achieving better performance and effectiveness in event stream segmentation compared to existing event representation algorithms. Additionally, compressing the processed results into frame images significantly improves imaging quality. Finally, we propose a new evaluation metric, the Actual Performance Efficiency Discrepancy (APED), which combines actual distortion rate and event information entropy to quantify and compare the effectiveness of our method against other existing event representation methods. The final experimental results demonstrate that our event representation method outperforms existing approaches and addresses the shortcomings of current methods in handling event streams with multiple entities moving at varying speeds simultaneously.
... 32 This inhibition underlies important computations in the retina, for example, detection of motion direction 33,34 or segregating objects from background. 35,36 ACs use diverse neurotransmitters, primarily GABA or glycine, but also others. Most ACs lack axons and possess synaptic output sites on their dendrites. ...
... and insensitive to global motion as might occur from eye movements. 33 Although these earlier studies used artificial gratings and measured only sensitivity rather than discriminability, our current analysis of the retinal neural code overall points to object motion sensitivity as a key function that drives the dynamic adaptation of discriminability for natural scenes. ...
Preprint
Full-text available
Sensory systems discriminate stimuli to direct behavioral choices, a process governed by two distinct properties - neural sensitivity to specific stimuli, and stochastic properties that importantly include neural correlations. Two questions that have received extensive investigation and debate are whether visual systems are optimized for natural scenes, and whether noise correlations contribute to this optimization. However, the lack of sufficient computational models has made these questions inaccessible in the context of the normal function of the visual system, which is to discriminate between natural stimuli. Here we take a direct approach to analyze discriminability under natural scenes for a population of salamander retinal ganglion cells using a model of the retinal neural code that captures both sensitivity and stochasticity. Using methods of information geometry and generative machine learning, we analyzed the manifolds of natural stimuli and neural responses, finding that discriminability in the ganglion cell population adapts to enhance information transmission about natural scenes, in particular about localized motion. Contrary to previous proposals, noise correlations reduce information transmission and arise simply as a natural consequence of the shared circuitry that generates changing spatiotemporal visual sensitivity. These results address a long-standing debate as to the role of retinal correlations in the encoding of natural stimuli and reveal how the highly nonlinear receptive fields of the retina adapt dynamically to increase information transmission under natural scenes by performing the important ethological function of local motion discrimination.
... This spatial nonlinearity is mediated via functional subunits in the receptive fields of retinal ganglion cells. These enable various specific computations that would be impossible without them, from sensitivity to fine spatial structures to various types of motion and pattern sensitivity [7][8][9][10][11][12][13]. Moreover, nonlinear spatial integration also plays a major role in shaping ganglion cell responses to natural stimuli [14][15][16]. ...
Article
Full-text available
Spatially nonlinear stimulus integration by retinal ganglion cells lies at the heart of various computations performed by the retina. It arises from the nonlinear transmission of signals that ganglion cells receive from bipolar cells, which thereby constitute functional subunits within a ganglion cell’s receptive field. Inferring these subunits from recorded ganglion cell activity promises a new avenue for studying the functional architecture of the retina. This calls for efficient methods, which leave sufficient experimental time to leverage the acquired knowledge for further investigating identified subunits. Here, we combine concepts from super-resolution microscopy and computed tomography and introduce super-resolved tomographic reconstruction (STR) as a technique to efficiently stimulate and locate receptive field subunits. Simulations demonstrate that this approach can reliably identify subunits across a wide range of model variations, and application in recordings of primate parasol ganglion cells validates the experimental feasibility. STR can potentially reveal comprehensive subunit layouts within only a few tens of minutes of recording time, making it ideal for online analysis and closed-loop investigations of receptive field substructure in retina recordings.
... 91 Responses of the 30-40 different types of mammalian RGCs 92,93 cover a wide range in terms of their response transience, therefore suggesting a large variety of visual tasks they perform. There has been a considerable collection of evidence supporting this view, including RGCs with transient responses that encode object movement [94][95][96][97] and the direction of motion, [98][99][100][101] whereas others with sustained responses have been proven to perceive luminosity contrast, 102 color contrast 103 or object orientation. 104 While the first cohort of these RGCs require a quick inactivation and corresponding decay of spiking frequency (transient response) in order to quickly recover and keep up with changes in the visual scene, sustained RGCs allow for the summation of inputs over an extended time frame to get more sensitized for minuscule differences of light levels (e.g., grayscale or color) within their receptive fields. ...
Article
Full-text available
Retinal ganglion cells (RGCs) summate inputs and forward a spike train code to the brain in the form of either maintained spiking (sustained) or a quickly decaying brief spike burst (transient). We report diverse response transience values across the RGC population and, contrary to the conventional transient/sustained scheme, responses with intermediary characteristics are the most abundant. Pharmacological tests showed that besides GABAergic inhibition, gap junction (GJ)-mediated excitation also plays a pivotal role in shaping response transience and thus visual coding. More precisely, GJs connecting RGCs to nearby amacrine cells and RGCs play a defining role in the process. These GJs equalize kinetic features, including the response transience of transient OFF alpha (tOFFa) RGCs across a coupled array. We propose that GJs in other coupled neuron ensembles in the brain are also critical in the harmonization of response kinetics to enhance the population code and suit the corresponding task.
... These circuits may have various roles, from basic enhancements to the eye's visual input to extracting sophisticated visual features [2]. Many works investigate the cellular circuits of the retina and their role in extracting specific features from the eye's visual input [14], [15], [16], [17]. Although these investigations provide thorough insight into the retina's structure and function, their development is slow and difficult. ...
Article
Full-text available
It is a popular hypothesis in neuroscience that ganglion cells in the retina are activated by selectively detecting visual features in an observed scene. While ganglion cell firings can be predicted via data-trained deep neural nets, the networks remain indecipherable, thus providing little understanding of the cells' underlying operations. To extract knowledge from the cell firings, in this paper we learn an interpretable graph-based classifier from data to predict the firings of ganglion cells in response to visual stimuli. Specifically, we learn a positive semi-definite (PSD) metric matrix M ⪰ 0 that defines Mahalanobis distances between graph nodes (visual events) endowed with pre-computed feature vectors; the computed inter-node distances lead to edge weights and a combinatorial graph that is amenable to binary classification. Mathematically, we define the objective of metric matrix M optimization using a graph adaptation of large margin nearest neighbor (LMNN), which is rewritten as a semi-definite programming (SDP) problem. We solve it efficiently via a fast approximation called Gershgorin disc perfect alignment (GDPA) linearization. The learned metric matrix M provides interpretability: important features are identified along M's diagonal, and their mutual relationships are inferred from off-diagonal terms. Our fast metric learning framework can be applied to other biological systems with pre-chosen features that require interpretation.
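The PSD constraint M ⪰ 0 is what makes the learned Mahalanobis distances a valid (pseudo-)metric. A minimal sketch of that building block, with M parameterized as LᵀL so positive semi-definiteness holds by construction; the actual optimization (graph LMNN solved via SDP / GDPA linearization) is not reproduced here, and L, the dimension, and the edge-weight form are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# M = L^T L is PSD by construction, so the induced Mahalanobis
# distance is non-negative. L is random purely for illustration; in
# the framework above, M is instead optimized so that same-class
# visual events sit closer together than different-class ones.
d = 4                              # illustrative feature dimension
L = rng.normal(size=(d, d))
M = L.T @ L

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    diff = x - y
    return float(diff @ M @ diff)

x, y = rng.normal(size=d), rng.normal(size=d)
dist2 = mahalanobis_sq(x, y, M)
edge_weight = np.exp(-dist2)       # e.g. a graph edge weight from the distance

assert dist2 >= 0.0                              # PSD => non-negative distances
assert np.all(np.linalg.eigvalsh(M) >= -1e-9)    # all eigenvalues non-negative
```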
... This spatial nonlinearity is mediated via functional subunits in the receptive fields of retinal ganglion cells. These enable various specific computations that would be impossible without them, from sensitivity to fine spatial structures to various types of motion and pattern sensitivity (Ölveczky et al., 2003; Münch et al., 2009; Zhang et al., 2012; Krishnamoorthy et al., 2017; Zapp et al., 2022; Krüppel et al., 2023). Moreover, nonlinear spatial integration also plays a major role in shaping ganglion cell responses to natural stimuli (Cao et al., 2011; Turner and Rieke, 2016; Karamanlis and Gollisch, 2021). ...
Preprint
Full-text available
Spatially nonlinear stimulus integration by retinal ganglion cells lies at the heart of various computations performed by the retina. It arises from the nonlinear transmission of signals that ganglion cells receive from bipolar cells, which thereby constitute functional subunits within a ganglion cell's receptive field. Inferring these subunits from recorded ganglion cell activity promises a new avenue for studying the functional architecture of the retina. This calls for efficient methods, which leave sufficient experimental time to leverage the acquired knowledge. Here, we combine concepts from super-resolution microscopy and computed tomography and introduce super-resolved tomographic reconstruction (STR) as a technique to efficiently stimulate and locate receptive field subunits. Simulations demonstrate that this approach can reliably identify subunits across a wide range of model variations, and application in recordings of primate parasol ganglion cells validates the experimental feasibility. STR can potentially reveal comprehensive subunit layouts within less than an hour of recording time, making it ideal for online analysis and closed-loop investigations of receptive field substructure in retina recordings.
... Object motion relative to the observer drives oculomotor tracking [26,27] and is an essential part of many crucial behaviors, like prey capture. Specialized circuitry as early as the retina distinguishes between object and background motion [28,29], while entire brain regions in the visual cortex of primates specialize in processing motion [30] with increasing complexity along the dorsal stream [31]. ...
Preprint
Some of the most important tasks of visual and motor systems involve estimating the motion of objects and tracking them over time. Such systems evolved to meet the behavioral needs of the organism in its natural environment, and may therefore be adapted to the statistics of motion it is likely to encounter. By tracking the movement of individual points in videos of natural scenes, we begin to identify common properties of natural motion across scenes. As expected, objects in natural scenes move in a persistent fashion, with velocity correlations lasting hundreds of milliseconds. More subtly, we find that the observed velocity distributions are heavy-tailed and can be modeled as a Gaussian scale-mixture. Extending this model to the time domain leads to a dynamic scale-mixture model, consisting of a Gaussian process multiplied by a positive scalar quantity with its own independent dynamics. Dynamic scaling of velocity arises naturally as a consequence of changes in object distance from the observer, and may approximate the effects of changes in other parameters governing the motion in a given scene. This modeling and estimation framework has implications for the neurobiology of sensory and motor systems, which need to cope with these fluctuations in scale in order to represent motion efficiently and drive fast and accurate tracking behavior.
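The dynamic scale-mixture model described above can be sketched generatively: a temporally correlated Gaussian process multiplied by a positive scale with its own slow dynamics yields heavy-tailed velocity marginals. All parameter values here are illustrative, not fitted to natural-scene statistics.

```python
import numpy as np

# Minimal generative sketch of the dynamic Gaussian scale-mixture:
# velocity v_t = s_t * g_t, where g_t is a correlated Gaussian AR(1)
# process and s_t = exp(x_t) is a positive scale with its own AR(1)
# dynamics. Parameters are illustrative, not fitted values.

rng = np.random.default_rng(1)
N, phi_g, phi_x, sig_x = 50_000, 0.9, 0.95, 0.3

g = np.zeros(N)                           # unit-variance Gaussian component
x = np.zeros(N)                           # log-scale process
for t in range(1, N):
    g[t] = phi_g * g[t - 1] + np.sqrt(1 - phi_g**2) * rng.normal()
    x[t] = phi_x * x[t - 1] + sig_x * rng.normal()

v = np.exp(x) * g                         # scale-mixture velocity trace

kurt = np.mean(v**4) / np.mean(v**2) ** 2 # a Gaussian trace would give ~3
print(f"kurtosis: {kurt:.1f}")
```

The kurtosis far exceeds the Gaussian value of 3, reproducing the heavy-tailed marginals the abstract reports for tracked natural motion.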
... Building precise computational models of neural response to natural visual stimuli is a fundamental scientific problem in sensory neuroscience. These models can offer insights into neural circuit computations, reveal new mechanisms, and validate theoretical predictions [1,2,3,4,5,6]. However, constructing such models is challenging due to the complex nonlinear processes involved in neural coding, such as synaptic transmission and spiking dynamics. ...
Conference Paper
Full-text available
Developing computational models of neural response is crucial for understanding sensory processing and neural computations. Current state-of-the-art neural network methods use temporal filters to handle temporal dependencies, resulting in an unrealistic and inflexible processing paradigm. Meanwhile, these methods target trial-averaged firing rates and fail to capture important features in spike trains. This work presents the temporal conditioning spiking latent variable models (TeCoS-LVM) to simulate the neural response to natural visual stimuli. We use spiking neurons to produce spike outputs that directly match the recorded trains. This approach helps to avoid losing information embedded in the original spike trains. We exclude the temporal dimension from the model parameter space and introduce a temporal conditioning operation to allow the model to adaptively explore and exploit temporal dependencies in stimuli sequences in a natural paradigm. We show that TeCoS-LVM models can produce more realistic spike activities and accurately fit spike statistics than powerful alternatives. Additionally, learned TeCoS-LVM models can generalize well to longer time scales. Overall, while remaining computationally tractable, our model effectively captures key features of neural coding systems. It thus provides a useful tool for building accurate predictive computational accounts for various sensory perception circuits.
... [31,32] In biology, the detection of moving targets is achieved by the collaboration of bipolar cells and retinal neurons. [33] To replicate this function, we reassemble the m × n pixels (m and n represent the length and width of the target image, respectively) using the long-term potentiation (LTP) and long-term depression (LTD) nonlinear indices [34–37] of the TPPS and then perform frame-difference calculations. Among them, the nonlinear LTD index is fitted from the inhibition of the post-synaptic potential under the action of a post light-pulse voltage (1 V at 10 Hz, Figures S14 and S15, Supporting Information). ...
Article
Full-text available
Neuromorphic vision based on photonic synapses has the ability to mimic the sensitivity, adaptivity, and sophistication of bio-visual systems. Significant advances in artificial photosynapses have been achieved recently. However, conventional photosynaptic devices normally employ opaque metal conductors and a vertical device configuration, yielding a limited hemispherical field of view. Here, a transparent planar photonic synapse (TPPS) is presented that offers dual-side photosensitive capability for nearly panoramic neuromorphic vision. The TPPS, consisting of all two-dimensional (2D) carbon-based derivatives, exhibits ultra-broadband photodetection (365–970 nm) and a ≈360° omnidirectional viewing angle. With its intrinsic persistent photoconductivity effect, the detector possesses bio-synaptic behaviors such as short/long-term memory, experience learning, light adaptation, and a 171% paired-pulse-facilitation index, enabling the synapse array to achieve image-recognition enhancement (92%) and moving-object detection.
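The snippet above describes replicating moving-target detection with frame-difference calculations. A bare-bones two-frame difference, leaving out the device-specific LTP/LTD nonlinearities, can be sketched as:

```python
import numpy as np

# Plain frame-difference sketch of moving-target detection (the hardware
# LTP/LTD nonlinearities of the TPPS are deliberately omitted): pixels
# whose intensity changes between frames are flagged as motion.

def motion_mask(frame_prev, frame_curr, thresh=0.1):
    """Binary mask of pixels whose intensity change exceeds `thresh`."""
    return np.abs(frame_curr.astype(float) - frame_prev.astype(float)) > thresh

# A 2x2 bright square moving one pixel to the right on a dark background.
f0 = np.zeros((8, 8)); f0[3:5, 2:4] = 1.0
f1 = np.zeros((8, 8)); f1[3:5, 3:5] = 1.0

mask = motion_mask(f0, f1)
print(mask.astype(int))
```

Only the trailing edge (pixels that went dark) and the leading edge (pixels that lit up) of the moving square are flagged; the overlapping column is unchanged and stays silent.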
... Defocused images have blurred edges, different focused planes, and varied light intensities compared with focused images. Amacrine cells may provide feedback inhibition, surround inhibition, adaptation, signal averaging, and noise reduction (Olveczky et al., 2003) to the signaling from RGCs, suggesting that they may take part in encoding focused/defocused images. As a control, Cx36 KO mice lose the filters in the outer and inner retina. ...
Article
Full-text available
The etiology of myopia remains unclear. This study investigated whether retinal ganglion cells (RGCs) in the myopic retina encode visual information differently from the normal retina and determined the role of Connexin (Cx) 36 in this process. Generalized linear models (GLMs), which can capture stimulus-dependent changes in real neurons with spike-timing precision and reliability, were used to predict RGC responses to focused and defocused images in the retinas of wild-type (normal) and Lens-Induced Myopia (LIM) mice. As Cx36 is the predominant subunit of gap junctions in the mouse retina and a plausible modulator in myopia development, Cx36 knockout (KO) mice were used as a control for an intact retinal circuit. The kinetics of excitatory postsynaptic currents (EPSCs) of a single αRGC could reflect the projection of both focused and defocused images in the retinas of normal and LIM mice, but not in Cx36 KO mice. Poisson GLMs revealed that RGC encoding of visual stimuli in the LIM retina was similar to that of the normal retina. In LIM retinas, the linear-Gaussian GLM model with offset was a better fit for predicting the spike count under a focused image than under a defocused image. The Akaike information criterion (AIC) indicated that the nonparametric GLM (np-GLM) model predicted focused/defocused images better in both LIM and normal retinas. However, the spike counts in 33% of αRGCs in LIM retinas were better fitted by the exponential GLM (exp-GLM) under defocus, compared to only 13% of αRGCs in normal retinas. The difference in encoding performance between LIM and normal retinas indicated possible amendment and plasticity of the retinal circuit in myopic retinas. The absence of a similar response between Cx36 KO mice and normal/LIM mice might suggest that Cx36, which is associated with myopia development, plays a role in encoding focused and defocused images.
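A hedged sketch of the modeling machinery this abstract relies on: a Poisson GLM for spike counts (log link), fit here by iteratively reweighted least squares (IRLS), compared against a null model via AIC. The data are simulated with a hypothetical one-feature stimulus, not retinal recordings, and the fitting routine is a generic textbook implementation rather than the study's code.

```python
import numpy as np

# Sketch of a Poisson GLM for spike counts with AIC model comparison.
# Simulated data; the single stimulus feature and parameters are made up.

rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])      # design: intercept + stimulus feature
beta_true = np.array([0.5, 1.0])
y = rng.poisson(np.exp(X @ beta_true))    # simulated spike counts

def fit_poisson_glm(X, y, iters=30):
    """Newton/IRLS for the Poisson log-likelihood with log link."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        H = X.T @ (X * mu[:, None])       # X^T W X, with IRLS weights W = mu
        beta = beta + np.linalg.solve(H, X.T @ (y - mu))
    return beta

def poisson_aic(X, y, beta):
    eta = X @ beta
    ll = np.sum(y * eta - np.exp(eta))    # log-lik up to a model-independent constant
    return 2 * X.shape[1] - 2 * ll

beta_hat = fit_poisson_glm(X, y)
aic_full = poisson_aic(X, y, beta_hat)

X0 = X[:, :1]                             # intercept-only (null) model
aic_null = poisson_aic(X0, y, fit_poisson_glm(X0, y))

print(beta_hat, aic_full < aic_null)
```

Lower AIC for the stimulus-driven model over the null model is the same comparison logic the study uses to rank GLM variants across focused and defocused conditions.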
... The visual pathway has been shown to be important in hunting [8,20,30]. For example, retinal ganglion cells (RGCs) that are sensitive to object motion, detect visual features of prey [31] including size, location, motion, and contrast polarity [8,30]. Interestingly, a visual "releasing signal" can trigger the predatory motor sequence in toads [17], where a visual bar that moves in a direction parallel to the toad's body orientation is sufficient to elicit a predatory attack [17]. ...
Article
Full-text available
Predatory hunting is an important type of innate behavior evolutionarily conserved across the animal kingdom. It is typically composed of a set of sequential actions, including prey search, pursuit, attack, and consumption. This behavior is subject to control by the nervous system. Early studies used toads as a model to probe the neuroethology of hunting, which led to the proposal of a sensory-triggered release mechanism for hunting actions. More recent studies have used genetically-trackable zebrafish and rodents and have made breakthrough discoveries in the neuroethology and neurocircuits underlying this behavior. Here, we review the sophisticated neurocircuitry involved in hunting and summarize the detailed mechanism for the circuitry to encode various aspects of hunting neuroethology, including sensory processing, sensorimotor transformation, motivation, and sequential encoding of hunting actions. We also discuss the overlapping brain circuits for hunting and feeding and point out the limitations of current studies. We propose that hunting is an ideal behavioral paradigm in which to study the neuroethology of motivated behaviors, which may shed new light on epidemic disorders, including binge-eating, obesity, and obsessive-compulsive disorders.
... Furthermore, it has been observed that the way drift transforms spatial patterns into a spatiotemporal flow on the retina implements a crucial information-processing step tuned to the characteristics of the natural visual world (6). This transformation discards redundant information and enhances neural responses to luminance discontinuities, processes long argued to be important goals of early visual processing (21, 25, 29–31), which is expected given that neurons in the retina and the early visual system are relatively insensitive to an unchanging input (32). ...
Article
Full-text available
Visual acuity is commonly assumed to be determined by the eye optics and spatial sampling in the retina. Unlike a camera, however, the eyes are never stationary during the acquisition of visual information; a jittery motion known as ocular drift incessantly displaces stimuli over many photoreceptors. Previous studies have shown that acuity is impaired in the absence of retinal image motion caused by eye drift. However, the relation between individual drift characteristics and acuity remains unknown. Here, we show that a) healthy emmetropes exhibit a large variability in their amount of drift and that b) these differences profoundly affect the structure of spatiotemporal signals to the retina. We further show that c) the spectral distribution of the resulting luminance modulations strongly correlates with individual visual acuity and that d) natural intertrial fluctuations in the amount of drift modulate acuity. As a consequence, in healthy emmetropes, acuity can be predicted from the motor behavior elicited by a simple fixation task, without directly measuring it. These results shed new light on how oculomotor behavior contributes to fine spatial vision.
... In addition to the motor-related gain modulation, small object detecting glomeruli are modulated by a visual surround that is tuned to wide-field, coherent visual motion that would normally be associated with locomotion. This is similar to motion-tuned surrounds in object motion-sensitive cells in the vertebrate retina (Baccus et al., 2008; Olveczky et al., 2003), and in figure-detecting neurons of the blowfly, which are suppressed by optic flow produced by self-motion (Egelhaaf, 1985; Kimmerle and Egelhaaf, 2000). Why would the fly visual system rely on these two seemingly redundant cues to estimate self-motion? ...
Article
Full-text available
Natural vision is dynamic: as an animal moves, its visual input changes dramatically. How can the visual system reliably extract local features from an input dominated by self-generated signals? In Drosophila, diverse local visual features are represented by a group of projection neurons with distinct tuning properties. Here we describe a connectome-based volumetric imaging strategy to measure visually evoked neural activity across this population. We show that local visual features are jointly represented across the population, and that a shared gain factor improves trial-to-trial coding fidelity. A subset of these neurons, tuned to small objects, is modulated by two independent signals associated with self-movement, a motor-related signal and a visual motion signal associated with rotation of the animal. These two inputs adjust the sensitivity of these feature detectors across the locomotor cycle, selectively reducing their gain during saccades and restoring it during intersaccadic intervals. This work reveals a strategy for reliable feature detection during locomotion.
... One possible source of supervision would be a population of neurons located within the same or different area that have access to different visual cues, such as cells sensitive to motion-defined boundaries. Such cells are found at many levels of the visual system, including the retinas of salamanders (Olveczky et al., 2003); V1, V2, V3, middle temporal cortex, and inferior temporal cortex in the monkey (Marcar et al., 1995, 2000; Sáry et al., 1995; Zeki et al., 2003); and in multiple areas of the human visual cortex (DuPont et al., 1997; Zeki et al., 2003; Mysore et al., 2006; Larsson et al., 2010). Topographic feedback projections from motion boundary-sensitive cells in these areas to V1 (or locally within V1) could help to instruct boundary cells in V1 so that they may perform well based purely on pictorial cues (i.e., when motion signals are unavailable). ...
Article
Full-text available
Detecting object boundaries is crucial for recognition, but how the process unfolds in visual cortex remains unknown. To study the problem faced by a hypothetical boundary cell, and to predict how cortical circuitry could produce a boundary cell from a population of conventional "simple cells", we labeled 30,000 natural image patches and used Bayes' rule to help determine how a simple cell should influence a nearby boundary cell depending on its relative offset in receptive field position and orientation. We identified three basic types of cell-cell interactions: rising and falling interactions with a range of slopes and saturation rates, as well as non-monotonic (bump-shaped) interactions with varying modes and amplitudes. Using simple models we show that a ubiquitous cortical circuit motif consisting of direct excitation and indirect inhibition - a compound effect we call "incitation" - can produce the entire spectrum of simple cell-boundary cell interactions found in our dataset. Moreover, we show that the synaptic weights that parameterize an incitation circuit can be learned by a single-layer "delta" rule. We conclude that incitatory interconnections are a generally useful computing mechanism that the cortex may exploit to help solve difficult natural classification problems. SIGNIFICANCE STATEMENT: Simple cells in primary visual cortex (V1) respond to oriented edges, and have long been supposed to detect object boundaries, yet the prevailing model of a simple cell - a divisively normalized linear filter - is a surprisingly poor natural boundary detector. To understand why, we analyzed image statistics on and off object boundaries, allowing us to characterize the neural-style computations needed to perform well at this difficult natural classification task. We show that a simple circuit motif known to exist in V1 is capable of extracting high-quality boundary probability signals from local populations of simple cells.
Our findings suggest a new, more general way of conceptualizing cell-cell interconnections in the cortex.
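A toy sketch of the single-layer "delta" rule the abstract mentions: a boundary unit learns signed weights over simulated "simple cell" responses, so positive weights act like direct excitation and negative weights like the indirect inhibition of the incitation motif. The data are synthetic, not the paper's 30,000 labeled natural-image patches.

```python
import numpy as np

# Toy delta-rule sketch: a sigmoidal boundary unit learns signed weights
# over synthetic "simple cell" responses. Data and dimensions are made up.

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simple-cell population: unit 0 is evidence for a boundary, unit 1 is
# evidence against it, unit 2 is uninformative noise.
n = 4000
X = rng.normal(size=(n, 3))
labels = (X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=n)) > 0

w, b, lr = np.zeros(3), 0.0, 1.0
for _ in range(300):                      # full-batch delta-rule updates
    p = sigmoid(X @ w + b)
    err = labels - p                      # the "delta" term
    w += lr * (X.T @ err) / n
    b += lr * err.mean()

acc = ((sigmoid(X @ w + b) > 0.5) == labels).mean()
print(w.round(2), round(acc, 3))
```

After training, the informative unit ends up with a positive (excitatory) weight and the anti-correlated unit with a negative (inhibitory) one, mirroring how the incitation weights could be learned from labeled patches.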
... However, there lies between devices and realistic systems a gap that needs to be filled by systematic solutions, i.e., the "blueprints" of neuromorphic computing devices based on artificial synapses. The blueprints should include practical circuit implementations and hierarchy, feasible approaches to replicate natural neural networks and network architectures optimized for hardware [64,201,202] to construct sophisticated neuromorphic hardware from bottom to top. As a basic hardware solution, Yang et al. reported a 2 × 2 MAPbI 3 perovskite memristor-based neural network, with a conceptual framework shown in Fig. 12a [45]. ...
Article
Emerging brain-inspired neuromorphic computing systems have become a potential candidate for overcoming the von Neuman bottleneck that limits the performance of most modern computers. Artificial synapses, used to mimic neural transmission and physical information sensing, could build highly robust and efficient computing systems similar to our brains. The employment of nanomaterials in the devices, and the device structures, are receiving a surge of interest, given the various benefits in better carrier dynamics, higher conductance, photonic interaction and photocarrier trapping, and the architectural feasibility with two and three-terminal devices. Moreover, the combination of artificial synapses and various nanomaterial-based active channels also enables visual recognition, multi-modality sensing-processing systems, hardware neural networks, etc., demonstrating appealing possibilities for practical applications. Here, we summarize the recent advances in synaptic devices based on low-dimensional nanomaterials, the novel devices with hybrid materials or structures, as well as implementation schemes of hardware neural networks. By the end of this review, we discuss the engineering issues including control methods, design complexity and fabrication process to be addressed, and envision the future developments of artificial synapse-based neuromorphic systems.
... Indeed, in the peripheral saccade condition, the modulation index for most ON RGCs was around 0 in the presence of the GABA A receptor antagonist SR-95531 (Fig. 2b and Supplementary Fig. 5d). These results suggest that this short-lived global component of suppression is caused by inhibition via GABAergic amacrine cells, perhaps similar to the polyaxonal amacrine cells described previously (20,39,41). Thus, while suppression is indeed partially caused by circuits detecting global changes across the retina, those circuits seem to act predominantly on ON RGCs, and even there, they only account for a fraction of the total suppression observed with full-field saccades (without mask), which lasts longer. ...
Article
Full-text available
Visual perception remains stable across saccadic eye movements, despite the concurrent strongly disruptive visual flow. This stability is partially associated with a reduction in visual sensitivity, known as saccadic suppression, which already starts in the retina with reduced ganglion cell sensitivity. However, the retinal circuit mechanisms giving rise to such suppression remain unknown. Here, we describe these mechanisms using electrophysiology in mouse, pig, and macaque retina, 2-photon calcium imaging, computational modeling, and human psychophysics. We find that sequential stimuli, like those that naturally occur during saccades, trigger three independent suppressive mechanisms in the retina. The main mechanism is triggered by contrast-reversing sequential stimuli and originates within the receptive field center of ganglion cells. It does not involve inhibition or other known suppressive mechanisms like saturation or adaptation. Instead, it relies on temporal filtering of the inherently slow response of cone photoreceptors coupled with downstream nonlinearities. Two further mechanisms of suppression are present predominantly in ON ganglion cells and originate in the receptive field surround, highlighting another disparity between ON and OFF ganglion cells. The mechanisms uncovered here likely play a role in shaping the retinal output following eye movements and other natural viewing conditions where sequential stimulation is ubiquitous.
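The main mechanism described above (temporal filtering by the inherently slow cone response, followed by a downstream nonlinearity) can be sketched with a one-pole low-pass filter and a rectifier. Time constants and amplitudes below are illustrative, not the study's fitted values.

```python
import numpy as np

# Minimal sketch (illustrative constants, not fitted data): a slow,
# cone-like low-pass filter followed by a rectifying nonlinearity.
# A probe flash after a contrast-reversing step lands while the filtered
# signal is still recovering, so the rectified output is suppressed.

dt, tau = 1.0, 50.0                       # ms; cone-like time constant
T, t_step, t_flash, flash_dur = 600, 250, 300, 30

def lowpass(s):
    out = np.zeros_like(s)
    a = dt / tau
    for i in range(1, len(s)):
        out[i] = out[i - 1] + a * (s[i] - out[i - 1])
    return out

def rate(s):
    """Rectified output of the slow filter (the downstream nonlinearity)."""
    return np.maximum(lowpass(s), 0.0)

flash = np.zeros(T)
flash[t_flash:t_flash + flash_dur] = 2.0  # probe flash

step = np.zeros(T)
step[t_step:] = -1.0                      # contrast-reversing step

r_alone = rate(flash)                     # flash by itself
r_seq = rate(step + flash)                # same flash after the reversal

print(r_alone.max(), r_seq.max())         # sequential response is suppressed
```

No explicit inhibition, saturation, or adaptation is needed: the linear filter's slow recovery from the reversal alone reduces the rectified response to the probe, matching the mechanism the abstract describes.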
... In particular, hybrid stimuli that retain some aspects of natural stimuli while simplifying others are useful for identifying candidate retinal functions and related mechanisms. Examples, which have also been discussed in this article, include natural time courses of light intensity and chromatic components with no spatial structure (Angueyra et al. 2022, Endeman & Kamermans 2010, Howlett et al. 2017, van Hateren et al. 2002), static presentations of natural photographic images (Cao et al. 2011, Karamanlis & Gollisch 2021, Turner & Rieke 2016), and eye-movement dynamics with simplified spatial patterns (Idrees et al. 2020, Krishnamoorthy et al. 2017, Kühn & Gollisch 2019, Ölveczky et al. 2003). Finding a good balance of naturalistic stimulus patterns and artificial simplifications and subsequently identifying the right natural stimuli for testing whether the hybrid stimulus results generalize to truly natural scenarios will be among the critical steps toward a comprehensive understanding of retinal encoding of natural scenes. ...
Article
An ultimate goal in retina science is to understand how the neural circuit of the retina processes natural visual scenes. Yet most studies in laboratories have long been performed with simple, artificial visual stimuli such as full-field illumination, spots of light, or gratings. The underlying assumption is that the features of the retina thus identified carry over to the more complex scenario of natural scenes. As the application of corresponding natural settings is becoming more commonplace in experimental investigations, this assumption is being put to the test and opportunities arise to discover processing features that are triggered by specific aspects of natural scenes. Here, we review how natural stimuli have been used to probe, refine, and complement knowledge accumulated under simplified stimuli, and we discuss challenges and opportunities along the way toward a comprehensive understanding of the encoding of natural scenes. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
... Among those types, ON and OFF RGCs are known to play a critical role in forming visual percepts (Schiller et al., 1986; Schiller, 1992). In addition to the asymmetries between light-evoked responses of the ON vs. the OFF pathways (Ölveczky et al., 2003; Margolis and Detwiler, 2007; Liang and Freed, 2012; Freed, 2017), retinal prosthetic studies reported contrasting differences between the two pathways (Freeman et al., 2010; Kameneva et al., 2010; Twyford et al., 2014; Fried, 2015, 2016a; Lee and Im, 2019). However, given the unique mosaic arrangement of each type of RGC (DeVries and Baylor, 1997; Masland, 2012), it seems almost inevitable to activate every type of RGC located near a given electrode delivering electric stimulation. ...
Article
Full-text available
Numerous retinal prosthetic systems have demonstrated that somewhat useful vision can be restored to individuals who had lost their sight due to outer retinal degenerative diseases. Earlier prosthetic studies have mostly focused on the confinement of electrical stimulation for improved spatial resolution and/or the biased stimulation of specific retinal ganglion cell (RGC) types for selective activation of the retinal ON/OFF pathways for enhanced visual percepts. To better replicate normal vision, it is also crucial to consider information transmission by spiking activities arising in the RGC population, since an incredible amount of visual information is transferred from the eye to the brain. In previous studies, however, it has not been well explored how much artificial visual information is created in response to electrical stimuli delivered by microelectrodes. In the present work, we discuss the importance of this neural information for high-quality artificial vision. First, we summarize the previous literature that has computed information transmission rates from spiking activities of RGCs in response to visual stimuli. Second, we exemplify a couple of studies that computed the neural information from electrically evoked responses. Third, we briefly introduce how information rates can be computed in the two representative ways: the direct method and the reconstruction method. Fourth, we introduce in silico approaches modeling artificial retinal neural networks to explore the relationship between the amount of information and the spiking patterns. Lastly, we conclude our review with clinical implications to emphasize the necessity of considering visual information transmission for further improvement of retinal prosthetics.
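The "direct method" mentioned above can be sketched in a few lines: information about the stimulus is estimated as the total entropy of spike "words" minus the noise entropy across repeated trials. The spike-word data below are synthetic and noiseless, chosen only to make the arithmetic transparent.

```python
import numpy as np

# Toy sketch of the direct method: information = total word entropy minus
# across-trial (noise) entropy. Spike words here are synthetic labels.

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def direct_method_info(words):
    """words: array (n_trials, n_timebins) of integer spike-word labels."""
    n_trials, n_time = words.shape
    # Total entropy: word distribution pooled over time and trials.
    _, tot = np.unique(words, return_counts=True)
    h_total = entropy(tot.astype(float))
    # Noise entropy: average over time of the across-trial word entropy.
    h_noise = 0.0
    for t in range(n_time):
        _, c = np.unique(words[:, t], return_counts=True)
        h_noise += entropy(c.astype(float))
    return h_total - h_noise / n_time      # bits per word

# Perfectly reliable response: every trial emits the same word sequence,
# so the noise entropy is zero and information equals the total entropy.
seq = np.array([0, 1, 2, 3, 0, 1, 2, 3])
words = np.tile(seq, (10, 1))
print(direct_method_info(words))          # 2.0 bits: 4 equiprobable words
```

With real spike trains the same routine would be applied to binarized response words, and the noise entropy would be nonzero, reducing the information rate accordingly.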
... Among their functions, amacrine cells are involved in the computations that enable object-motion detection: they provide a signal that distinguishes the actual motion of an object relative to the global background of the scene from the observer's own movements (Olveczky, Baccus, and Meister 2003). They can also generate direct excitation onto ganglion cells that are selective for motion direction (Hausselt et al. 2007). ...
Thesis
The processing of spatial configurations is a mechanism constantly at work in the visual cortex. In our world, which is full of regularities, it plays a central role in the analysis of objects in our environment, allowing us to establish spatial relationships between sets of elements and arrive at a global percept. While some characteristics of these mechanisms have been studied in human and non-human primates, the observations from these studies have mostly been obtained with different approaches: non-invasive neuroimaging methods are favored in humans, whereas more invasive methods such as electrophysiology are favored in monkeys. Although they provide critical support for understanding neural mechanisms, findings from single-unit recordings in monkeys can be transposed to humans only once functional homologies and differences have been identified with the same experimental approaches. To this end, this thesis addresses the need for comparative studies between the two species on the visual processing of spatial configurations, covering symmetry processing and the configural processing of faces, using an fMRI approach. A first study, conducted in collaboration with researchers at Stanford University, allowed us to examine responses to textural stimuli containing symmetry patterns in the macaque.
We showed (1) a cortical network for processing rotational symmetry that is similar between human and non-human primates, (2) responses that increase parametrically with the order of symmetry presented (n rotations), (3) a similar network for processing rotational and reflection symmetry in the macaque, and (4) stronger responses to symmetric patterns with two axes (horizontal and vertical) than with a single (horizontal) axis. We thus observed that symmetry responses in the macaque begin beyond V1, in a network comprising areas V2, V3, V3A, and V4, as in humans, as well as parametric responses to rotational symmetry order in areas V3, V4, and PITd, as reported in human subjects. Taken together, these results reveal the cortical network for symmetry processing, never before observed in the macaque, supported by visual areas homologous to those of humans. They open new avenues for understanding single-neuron mechanisms through more invasive approaches in the monkey, particularly in area V3, which appears to play an important role in the sophisticated processing of spatial-configuration parameters. The second study of this thesis project aimed to investigate the mechanisms of facial-identity recognition in the monkey through the configural orientation of faces, with the goal of performing a cross-species comparison of holistic face processing. While it is widely accepted that humans are experts at identifying faces, through mechanisms that depend on the orientation in which the faces are presented, the results in monkeys are far more contradictory.
To resolve these contradictions, we designed an innovative protocol for measuring the inversion effect in both species that requires neither training nor a behavioral task. This study, conducted in collaboration with B. Rossion, is still in the data-acquisition phase. Nevertheless, the data could provide evidence of distinct functional mechanisms between the two species, calling for a potential re-evaluation of the use of the macaque for studying and understanding the processes of facial-identity recognition in humans.
... Vertebrate retinae are equipped with retinal ganglion cells selective for motion direction [80] as well as for small objects [81-83]. The axon terminals of motion- and object-selective ganglion cells innervate the shallowest layers of the optic tectum in zebrafish [84] as well as of the superior colliculus in mice [85]. Although the internal circuitry of the optic tectum/superior colliculus is still not well understood, physiological studies on the neural bases of prey capture in larval zebrafish have identified tectal neurons that show direction-selective responses to small objects similar to those in LPLC1. ...
Article
Visual motion provides rich geometrical cues about the three-dimensional configuration of the world. However, how brains decode the spatial information carried by motion signals remains poorly understood. Here, we study a collision-avoidance behavior in Drosophila as a simple model of motion-based spatial vision. With simulations and psychophysics, we demonstrate that walking Drosophila exhibit a pattern of slowing to avoid collisions by exploiting the geometry of positional changes of objects on near-collision courses. This behavior requires the visual neuron LPLC1, whose tuning mirrors the behavior and whose activity drives slowing. LPLC1 pools inputs from object and motion detectors, and spatially biased inhibition tunes it to the geometry of collisions. Connectomic analyses identified circuitry downstream of LPLC1 that faithfully inherits its response properties. Overall, our results reveal how a small neural circuit solves a specific spatial vision task by combining distinct visual features to exploit universal geometrical constraints of the visual world.
Article
Detecting the motion of an object relative to a world-fixed frame of reference is an exquisite human capability [G. E. Legge, F. Campbell, Vis. Res. 21, 205–213 (1981)]. However, there is a special condition where humans are unable to accurately detect relative motion: Images moving in a direction consistent with retinal slip where the motion is unnaturally amplified can, under some conditions, appear stable [D. W. Arathorn, S. B. Stevenson, Q. Yang, P. Tiruveedhula, A. Roorda, J. Vis. 13, 22 (2013)]. We asked: Is world-fixed retinal image background content necessary for the visual system to compute the direction of eye motion, and consequently generate stable percepts of images moving with amplified slip? Or, are nonvisual cues sufficient? Subjects adjusted the parameters of a stimulus moving in a random trajectory to match the perceived motion of images moving contingent to the retina. Experiments were done with and without retinal image background content. The perceived motion of stimuli moving with amplified retinal slip was suppressed in the presence of a visible background; however, higher magnitudes of motion were perceived under conditions when there was none. Our results demonstrate that the presence of retinal image background content is essential for the visual system to compute its direction of motion. The visual content that might be thought to provide a strong frame of reference to detect amplified retinal slips, instead paradoxically drives the misperception of relative motion.
Preprint
Full-text available
The retina extracts chromatic information present in an animal’s environment. In the mouse, the feed-forward, excitatory pathway through the retina is dominated by a chromatic gradient, with green and UV signals primarily processed in the dorsal and ventral retina, respectively. However, at the output of the retina, chromatic tuning is more mixed, suggesting that amacrine cells alter spectral tuning. We genetically targeted the population of 40+ GABAergic amacrine cell types and used two-photon calcium imaging to systematically survey chromatic responses in their dendritic processes. We found that amacrine cells show diverse chromatic responses in different spatial regions of their receptive fields and across the dorso-ventral axis of the retina. Compared to their excitatory inputs from bipolar cells, amacrine cells are less chromatically tuned and less likely to be colour-opponent. We identified 25 functional amacrine cell types that, in addition to their chromatic properties, exhibit distinctive achromatic receptive field properties. A combination of pharmacological interventions and a biologically-inspired deep learning model revealed how lateral inhibition and recurrent excitatory inputs shape chromatic properties of amacrine cells. Our data suggest that amacrine cells balance the strongly biased spectral tuning of excitation in the mouse retina and thereby support increased diversity in chromatic information of the retinal output.
Preprint
Full-text available
Detecting conspicuous stimuli in a visual scene is crucial for animal survival, yet it remains debated how the brain encodes visual salience. Here we investigate how visual salience is represented in the superficial superior colliculus (sSC) of awake mice using two-photon calcium imaging. We report on a feature-independent salience map in the sSC. Specifically, conspicuous stimuli evoke stronger responses in both excitatory and inhibitory neurons compared to uniform stimuli, with similar encoding patterns observed in both neuron types. The largest response occurs when a salient stimulus is positioned at the receptive field center, with contextual effects extending ∼40° away from the center. The response amplitude correlates well with the salience strength of stimuli and is not influenced by the orientation or motion direction preferences of neurons. Furthermore, visual salience is encoded in a feature-independent manner, and neurons involved in salience encoding are less likely to exhibit orientation or direction selectivity.
Article
Full-text available
Intelligent vision necessitates the deployment of detectors that are always-on and low-power, mirroring the continuous and uninterrupted responsiveness characteristic of human vision. Nonetheless, contemporary artificial vision systems attain this goal by continuously processing massive numbers of image frames and executing intricate algorithms, thereby expending substantial computational power and energy. In contrast, biological data processing, based on event-triggered spiking, has higher efficiency and lower energy consumption. Here, this work proposes an artificial vision architecture consisting of spiking photodetectors and artificial synapses, closely mirroring the intricacies of the human visual system. Distinct from previously reported techniques, the photodetector is self-powered and event-triggered, outputting light-modulated spiking signals directly and thereby fulfilling the imperative of always-on operation with low power consumption. With the spiking signals processed through the integrated synapse units, recognition of graphics, gestures, and human action has been implemented, illustrating the potent image processing capabilities inherent within this architecture. The results demonstrate a 90% accuracy rate in human action recognition within a mere five epochs using a rudimentary artificial neural network. This novel architecture, grounded in spiking photodetectors, offers a viable alternative to extant models of always-on, low-power artificial vision systems.
Article
Full-text available
In early sensory systems, cell-type diversity generally increases from the periphery into the brain, resulting in a greater heterogeneity of responses to the same stimuli. Surround suppression is a canonical visual computation that begins within the retina and is found at varying levels across retinal ganglion cell types. Our results show that heterogeneity in the level of surround suppression occurs subcellularly at bipolar cell synapses. Using single-cell electrophysiology and serial block-face scanning electron microscopy, we show that two retinal ganglion cell types exhibit very different levels of surround suppression even though they receive input from the same bipolar cell types. This divergence of the bipolar cell signal occurs through synapse-specific regulation by amacrine cells at the scale of tens of microns. These findings indicate that each synapse of a single bipolar cell can carry a unique visual signal, expanding the number of possible functional channels at the earliest stages of visual processing.
Article
Full-text available
Neural computations arise from highly precise connections between specific types of neurons. Retinal ganglion cells (RGCs) with similar stratification patterns are positioned to receive similar inputs but often display different response properties. In this study, we used intersectional mouse genetics to achieve single-cell type labeling and identified an object motion sensitive (OMS) AC type, COMS-AC (counter-OMS AC). Optogenetic stimulation revealed that COMS-AC makes glycinergic synapses with the OMS-insensitive HD2p-RGC, while chemogenetic inactivation showed that COMS-AC provides inhibitory control to HD2p-RGC during local motion. This local inhibition, combined with the inhibitory drive from TH2-AC during global motion, explains the OMS-insensitive feature of HD2p-RGC. In contrast, COMS-AC fails to make synapses with W3(UHD)-RGC, allowing it to exhibit OMS under the control of VGlut3-AC and TH2-AC. These findings reveal modular interneuron circuits that endow structurally similar RGC types with different responses and present a mechanism for redundancy-reduction in the retina to expand coding capacity.
Preprint
Full-text available
The prevailing hierarchical view of the visual system consists of parallel circuits that begin in the retina, which then sum effects across sequential levels, increasing in complexity. Yet a separate type of interaction, whereby one visual pattern changes the influence of another, known as modulation, has received much less attention in terms of its circuit mechanisms. Retinal amacrine cells are a diverse class of inhibitory interneurons that are thought to have modulatory effects, but we lack a general understanding of their functional types. Using dynamic causal experiments in the salamander retina perturbing amacrine cells along with an unsupervised computational framework, we find that amacrine cell modulatory effects cluster into two distinct types. One type controls ganglion cell sensitivity to individual visual features, and a second type controls the ganglion cell’s output gain, acting to gate all features. These results establish three separate general roles of amacrine cells – to generate primary visual features, to use context to select specific visual features and to gate retinal output.
Article
Full-text available
Neuro-inspired vision systems hold great promise to address the growing demands of mass data processing for edge computing, a distributed framework that brings computation and data storage closer to the sources of data. In addition to the capability of static image sensing and processing, the hardware implementation of a neuro-inspired vision system also requires the fulfilment of detecting and recognizing moving targets. Here, we demonstrated a neuro-inspired optical sensor based on two-dimensional NbS2/MoS2 hybrid films, which featured remarkable photo-induced conductance plasticity and low electrical energy consumption. A neuro-inspired optical sensor array with 10 × 10 NbS2/MoS2 phototransistors enabled highly integrated functions of sensing, memory, and contrast enhancement capabilities for static images, which benefits convolutional neural network (CNN) with a high image recognition accuracy. More importantly, in-sensor trajectory registration of moving light spots was experimentally implemented such that the post-processing could yield a high restoration accuracy. Our neuro-inspired optical sensor array could provide a fascinating platform for the implementation of high-performance artificial vision systems.
Article
Human vision relies on a tiny region of the retina, the 1-deg foveola, to achieve high spatial resolution. Foveal vision is of paramount importance in daily activities, yet its study is challenging, as eye movements incessantly displace stimuli across this region. Here I will review work that, building on recent advances in eye-tracking and gaze-contingent display, examines how attention and eye movements operate at the foveal level. This research highlights how exploration of fine spatial detail unfolds following visuomotor strategies reminiscent of those occurring at larger scales. It shows that, together with highly precise control of attention, this motor activity is linked to non-homogenous processing within the foveola and selectively modulates sensitivity both in space and time. Overall, the picture emerges of a highly dynamic foveal perception in which fine spatial vision, rather than simply being the result of placing a stimulus at the center of gaze, is the result of a finely tuned and orchestrated synergy of motor, cognitive, and attentional processes.
Article
Full-text available
Sending an axon out of the eye and into the target brain nuclei is the defining feature of retinal ganglion cells (RGCs). The literature on RGC axon pathfinding is vast, but it focuses mostly on decision making events such as midline crossing at the optic chiasm or retinotopic mapping at the target nuclei. In comparison, the exit of RGC axons out of the eye is much less explored. The first checkpoint on the RGC axons’ path is the optic cup - optic stalk junction (OC-OS). OC-OS development and the exit of the RGC pioneer axons out of the eye are coordinated spatially and temporally. By the time the optic nerve head domain is specified, the optic fissure margins are in contact and the fusion process is ongoing, the first RGCs are born in its proximity and send pioneer axons in the optic stalk. RGC differentiation continues in centrifugal waves. Later born RGC axons fasciculate with the more mature axons. Growth cones at the end of the axons respond to guidance cues to adopt a centripetal direction, maintain nerve fiber layer restriction and to leave the optic cup. Although there is extensive information on OC-OS development, we still have important unanswered questions regarding its contribution to the exit of the RGC axons out of the eye. We are still to distinguish the morphogens of the OC-OS from the axon guidance molecules which are expressed in the same place at the same time. The early RGC transcription programs responsible for axon emergence and pathfinding are also unknown. This review summarizes the molecular mechanisms for early RGC axon guidance by contextualizing mouse knock-out studies on OC-OS development with the recent transcriptomic studies on developing RGCs in an attempt to contribute to the understanding of human optic nerve developmental anomalies. 
The published data summarized here suggests that the developing optic nerve head provides a physical channel (the closing optic fissure) as well as molecular guidance cues for the pioneer RGC axons to exit the eye.
Article
In recent years, fundamental preparation processes for two-dimensional (2D) materials, such as high-quality wafer-level single-crystal thin-film synthesis and high-performance electrode fabrication, have developed rapidly. In addition, the prospect of integrated applications of 2D materials has been preliminarily verified, owing to the flat and clean interface between 2D materials and substrates. From the perspective of electronics and optoelectronics based on 2D materials, this paper summarizes recent studies of integrated circuit hardware, integrated optoelectronic hardware, and hetero-integrated hardware, showing their advantages and potential applications.
Article
Efficient motion detection is essential for the Internet of Things. However, it suffers from overloading redundant static background information. Inspired by the human visual system, which is efficient in motion detection, we propose a silicon-based retinomorphic photodetector with a simple metal/insulator/semiconductor (MIS) structure, which is compatible with the complementary metal-oxide-semiconductor (CMOS) industry. In contrast to conventional photodetectors that generate a sustained photocurrent, our retinomorphic photodetector is sensitive only to the change in light intensity and therefore filters the redundant static background. In addition, it shows logarithmic dependence on the light intensity, which simplifies the contrast ratio measurement. Based on our moving object recognition experiment, after filtering background information, the information to be analyzed is reduced to 27.3%, thereby improving the image recognition efficiency in the subsequent processing tasks. This innovative and industry-compatible retinomorphic photodetector will facilitate the construction of future efficient motion detection systems.
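The change-only, logarithmic response described above can be illustrated with a minimal sketch. The threshold, the reference-update rule, and the intensity values are illustrative assumptions, not the device's measured characteristics:

```python
import math

def event_stream(intensities, threshold=0.1):
    """Emit (time, polarity) events when the log-intensity change since
    the last event exceeds a threshold, mimicking a change-sensitive
    retinomorphic photodetector that filters out static background."""
    events = []
    ref = math.log(intensities[0])  # reference log-intensity
    for t, i in enumerate(intensities[1:], start=1):
        log_i = math.log(i)
        if log_i - ref > threshold:      # brightening event
            events.append((t, 1))
            ref = log_i
        elif ref - log_i > threshold:    # dimming event
            events.append((t, -1))
            ref = log_i
    return events

# A static background yields no events; a step in intensity yields one.
print(event_stream([100.0, 100.0, 100.0, 100.0]))         # []
print(event_stream([100.0, 100.0, 150.0, 150.0, 150.0]))  # [(2, 1)]
```

Because the comparison is done in log space, the same contrast ratio triggers an event regardless of absolute illumination, paralleling the logarithmic intensity dependence reported for the detector.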
Preprint
Full-text available
Feedforward models are mechanisms enabling an agent to predict the sensory outcomes of its actions. They can be implemented in the nervous system in the form of efference copies: copies of motor signals that are subtracted from the sensory stimulation actually detected, literally cancelling the perceptual outcome of the predicted action. In insects, efference copies are known to modulate optic flow detection for flight control in fruit flies. Much less is known, however, about possible feedforward control in other insects. Here we investigated whether feedforward control occurs in the detection of horizontal optic flow in walking ants, and how the latter is integrated to modulate their locomotion. We mounted Cataglyphis velox ants within a virtual reality set-up, allowing us to manipulate the relationship between the ant's movements and the optic flow it perceives. Results show that ants compute a prediction error as the difference between the optic flow expected from their own movements and the flow they actually perceive. Interestingly, this prediction does not control locomotion directly, but modulates the ant's intrinsic oscillator, which produces continuous alternations between right and left turns. Moreover, we show that the prediction also involves proprioceptive feedback and is additionally modulated by the visual structure of the surrounding panorama in a functional way. Finally, prediction errors stemming from both eyes are integrated before modulating the oscillator, providing redundancy and robustness to the system. Overall, our study reveals that ants compute robust predictions of the optic flow they should receive using a distributed mechanism that integrates feedforward and feedback signals as well as innate information about the structure of the world, and that controls their locomotion through oscillations.
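The efference-copy comparison described in the abstract above can be sketched in a few lines. The function name, the linear gain, and the sign convention are assumptions chosen for illustration, not the study's fitted model:

```python
def visual_prediction_error(motor_turn, perceived_flow, gain=1.0):
    """Efference-copy sketch: predict the optic flow expected from the
    animal's own turning command and subtract it from the flow actually
    perceived. A zero error means the flow is fully self-generated."""
    expected_flow = gain * motor_turn  # assumed linear motor-to-flow mapping
    return perceived_flow - expected_flow

# Self-generated rotation is cancelled; externally imposed flow is not.
print(visual_prediction_error(motor_turn=5.0, perceived_flow=5.0))  # 0.0
print(visual_prediction_error(motor_turn=5.0, perceived_flow=8.0))  # 3.0
```

In the study, such an error would not steer the ant directly; it would instead bias the intrinsic oscillator that generates the alternating left/right turns.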
Article
Retinal prostheses are a promising means for restoring sight to patients blinded by photoreceptor atrophy. They introduce visual information by electrical stimulation of the surviving inner retinal neurons. Subretinal implants target the graded-response secondary neurons, primarily the bipolar cells, which then transfer the information to the ganglion cells via the retinal neural network. Therefore, many features of natural retinal signal processing can be preserved in this approach if the inner retinal network is retained. Epiretinal implants stimulate primarily the ganglion cells, and hence should encode the visual information in spiking patterns, which, ideally, should match the target cell types. Currently, subretinal arrays are being developed primarily for restoration of central vision in patients impaired by age-related macular degeneration (AMD), while epiretinal implants are being developed for patients blinded by retinitis pigmentosa, where the inner retina is less preserved. This review describes the concepts and technologies, preclinical characterization of prosthetic vision and clinical outcomes, and provides a glimpse into future developments.
Thesis
The progressive loss of photoreceptors in patients with retinitis pigmentosa and other retinal degenerative diseases is a major cause of blindness. Strides have been made recently in applying optogenetics to restore vision in animal models by targeting opsin expression to remaining photoreceptors, intermediate layer bipolar cells, and output layer retinal ganglion cells. In later stages of the disease, few photoreceptors remain, and targeting ganglion cells limits the normal visual processing that can be preserved. For example, when an opsin is expressed in ganglion cells, OFF ganglion cells, sensitive to light decrease in the normal retina, will switch polarity and become ON, i.e. sensitive to light increase. Current vision restoration strategies thus only restore a limited amount of retinal processing. Here we present a vision restoration strategy where we target the AII amacrine cell. This interneuron connects the On and Off visual pathways through both sign-preserving and sign-inverting synapses. Our results, from ex vivo mouse retina multielectrode array recordings, show that optogenetic stimulation of AII amacrine cells can generate both ON and OFF ganglion cell responses in both normal and degenerated retinas. By comparing responses to normal light stimulation with responses to optogenetic stimulation, we found that the majority of ganglion cells, responsive to both stimuli, maintained their polarity between the two conditions, suggesting that similar pathways are activated in normal and optogenetic stimulation. We show that this strategy also allows restoring the diversity of ganglion cell responses, beyond their ON-OFF nature. These results indicate that the AII could be a useful target for vision restoration in the future.
Article
In most sensory modalities, neuronal connectivity reflects behaviorally relevant stimulus features, such as spatial location, orientation, and sound frequency. By contrast, the prevailing view in the olfactory cortex, based on the reconstruction of dozens of neurons, is that connectivity is random. Here, we used high-throughput sequencing-based neuroanatomical techniques to analyze the projections of 5,309 mouse olfactory bulb and 30,433 piriform cortex output neurons at single-cell resolution. Surprisingly, statistical analysis of this much larger dataset revealed that the olfactory cortex connectivity is spatially structured. Single olfactory bulb neurons targeting a particular location along the anterior-posterior axis of piriform cortex also project to matched, functionally distinct, extra-piriform targets. Moreover, single neurons from the targeted piriform locus also project to the same matched extra-piriform targets, forming triadic circuit motifs. Thus, as in other sensory modalities, olfactory information is routed at early stages of processing to functionally diverse targets in a coordinated manner.
Article
Full-text available
Antagonistic interactions between center and surround receptive field (RF) components lie at the heart of the computations performed in the visual system. Circularly symmetric center-surround RFs are thought to enhance responses to spatial contrasts (i.e., edges), but how visual edges affect motion processing is unclear. Here, we addressed this question in retinal bipolar cells, the first visual neuron with classic center-surround interactions. We found that bipolar glutamate release emphasizes objects that emerge in the RF; their responses to continuous motion are smaller, slower, and cannot be predicted by signals elicited by stationary stimuli. In our hands, the alteration in signal dynamics induced by novel objects was more pronounced than edge enhancement and could be explained by priming of RF surround during continuous motion. These findings echo the salience of human visual perception and demonstrate an unappreciated capacity of the center-surround architecture to facilitate novel object detection and dynamic signal representation.
Article
Dendritic computations have a central role in neuronal function, but it is unknown how cell-class heterogeneity of dendritic electrical excitability shapes physiologically engaged neuronal and circuit computations. To address this, we examined dendritic integration in closely related classes of retinal ganglion cells (GCs) using simultaneous somato-dendritic electrical recording techniques in a functionally intact circuit. Simultaneous recordings revealed sustained OFF-GCs generated powerful dendritic spikes in response to visual input that drove action potential firing. In contrast, the dendrites of transient OFF-GCs were passive and did not generate dendritic spikes. Dendritic spike generation allowed sustained, but not transient, OFF-GCs to signal into action potential output the local motion of visual stimuli to produce a continuous wave of action potential firing in adjacent cells as images moved across the retina. Conversely, this representation was highly fragmented in transient OFF-GCs. Thus, a heterogeneity of dendritic excitability defines the computations executed by classes of GCs.
Article
Full-text available
We examined the morphology and physiological response properties of the axon-bearing, long-range amacrine cells in the rabbit retina. These so-called polyaxonal amacrine cells all displayed two distinct systems of processes: (1) a dendritic field composed of highly branched and relatively thick processes and (2) a more extended, often sparsely branched axonal arbor derived from multiple thin axons emitted from the soma or dendritic branches. However, we distinguished six morphological types of polyaxonal cells based on differences in the fine details of their soma/dendritic/axonal architecture, level of stratification within the inner plexiform layer (IPL), and tracer coupling patterns. These morphological types also showed clear differences in their light-evoked response activity. Three of the polyaxonal amacrine cell types showed on-off responses, whereas the remaining cells showed on-center responses; we did not encounter polyaxonal cells with off-center physiology. Polyaxonal cells respected the on/off sublamination scheme in that on-off cells maintained dendritic/axonal processes in both sublamina a and b of the IPL, whereas processes of on-center cells were restricted to sublamina b. All polyaxonal amacrine cell types displayed large somatic action potentials, but we found no evidence for low-amplitude dendritic spikes that have been reported for other classes of amacrine cell. The center-receptive fields of the polyaxonal cells were comparable to the diameter of their respective dendritic arbors and, thus, were significantly smaller than their extensive axonal fields. This correspondence between receptive and dendritic field size was seen even for cells showing extensive homotypic and/or heterotypic tracer coupling to neighboring neurons. These data suggest that all polyaxonal amacrine cells are polarized functionally into receptive dendritic and transmitting axonal zones. J. Comp. Neurol. 440:109–125, 2001. © 2001 Wiley-Liss, Inc.
Article
Full-text available
Retinal ganglion cells of the Y type in the cat retina produce two different types of response: linear and nonlinear. The nonlinear responses are generated by a separate and independent nonlinear pathway. The functional connectivity in this pathway is analyzed here by comparing the observed second-order frequency responses of Y cells with predictions of a "sandwich model" in which a static nonlinear stage is sandwiched between two linear filters. The model agrees well with the qualitative and quantitative features of the second-order responses. The prefilter in the model may well be the bipolar cells and the nonlinearity and postfilter in the model are probably associated with amacrine cells.
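The "sandwich model" lends itself to a compact simulation. The sketch below assumes half-wave rectification as the static nonlinearity and short boxcar kernels for the two linear filters (both are illustrative choices, not the fitted filters); pooling two rectified subunits driven in antiphase reproduces the frequency-doubled response characteristic of Y cells:

```python
import numpy as np

def sandwich(stimulus, pre_kernel, post_kernel):
    """Linear prefilter -> static nonlinearity -> linear postfilter.
    Two subunits see the stimulus in opposite phase, are half-wave
    rectified, and are pooled before the postfilter."""
    pre = np.convolve(stimulus, pre_kernel, mode="same")       # bipolar-like prefilter
    pooled = np.maximum(pre, 0.0) + np.maximum(-pre, 0.0)      # rectified subunit pool
    return np.convolve(pooled, post_kernel, mode="same")       # amacrine-like postfilter

# A contrast-reversing sinusoid at 8 cycles per record...
t = np.arange(256)
stim = np.sin(2 * np.pi * 8 * t / 256)
out = sandwich(stim, np.ones(3) / 3, np.ones(3) / 3)
spectrum = np.abs(np.fft.rfft(out - out.mean()))
# ...drives the output mainly at the second harmonic (bin 16), the
# frequency-doubling signature of the nonlinear Y-cell pathway.
```

Removing the rectification leaves a purely linear cascade, whose output energy stays at the fundamental (bin 8) instead.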
Article
Full-text available
The early stages of primate visual processing appear to be divided up into several component parts so that, for example, colour, form and motion are analysed by anatomically distinct streams. We have found that further subspecialization occurs within the motion processing stream. Neurons representing two different kinds of information about visual motion are segregated in columnar fashion within the middle temporal area of the owl monkey. These columns can be distinguished by labelling with 2-deoxyglucose in response to large-field random-dot patterns. Neurons in lightly labelled interbands have receptive fields with antagonistic surrounds: the response to a centrally placed moving stimulus is suppressed by motion in the surround. Neurons in more densely labelled bands have surrounds that reinforce the centre response so that they integrate motion cues over large areas of the visual field. Interband cells carry information about local motion contrast that may be used to detect motion boundaries or to indicate retinal slip during visual tracking. Band cells encode information about global motion that might be useful for orienting the animal in its environment.
Article
Full-text available
The neural circuitry underlying movement detection was inferred from studies of amacrine cells under whole-cell patch clamp in retinal slices. Cells were identified by Lucifer yellow staining. Synaptic inputs were driven by "puffing" transmitter substances at the dendrites of presynaptic cells. Spatial sensitivity profiles for amacrine cells were measured by puffing transmitter substances along the lateral spread of their processes. Synaptic pathways were separated and identified with appropriate pre- and postsynaptic pharmacological blocking agents. Two distinct amacrine cell types were found: one with a narrow spread of processes carrying sustained excitatory synaptic currents, the other with a very wide spread of processes carrying transient excitatory synaptic currents. The transient currents found only in the wide-field amacrine cell were formed presynaptically at GABAB receptors. They could be blocked with baclofen, a GABAB agonist, and their time course was extended by AVA, a GABAB antagonist. Baclofen and AVA had no direct effect upon the wide-field amacrine cell, but picrotoxin blocked a separate, direct GABA input to this cell. The narrow-field amacrine cell was shown to be GABAergic by counterstaining with anti-GABA antiserum after it was filled with Lucifer yellow. Its narrow spatial profile and sustained synaptic input closely match those of the GABAergic antagonistic signal that forms transient activity (described above), suggesting that the narrow-field amacrine cell itself is the source of the GABAergic interaction mediating transient activity in the inner plexiform layer (IPL). Other work has shown a GABAB sensitivity at some bipolar terminals, suggesting a population of bipolars as the probable site of the interaction mediating transient activity. The results suggest that two local populations of amacrine cell types (sustained and transient) interact with the two populations of bipolar cell types (transient-forming and nontransient-forming).
These interactions underlie the formation of the change-detecting subunits. We suggest that local populations of these subunits converge to form the receptive fields of movement-detecting ganglion cells.
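A toy model makes this convergence concrete: change-detecting subunits excite the ganglion cell, while a trajectory-matched global signal suppresses it. Everything here (the difference-based change detector and the cosine-similarity suppression rule) is a simplifying assumption for illustration, not the circuit's measured transfer function:

```python
import numpy as np

def oms_response(center_traj, surround_traj):
    """Object-motion-sensitive sketch: excitation grows with temporal
    change in the receptive-field center and is suppressed in proportion
    to how similar the surround trajectory is to the center's
    (identical global motion -> full suppression)."""
    c = np.diff(np.asarray(center_traj, dtype=float))
    s = np.diff(np.asarray(surround_traj, dtype=float))
    excitation = np.abs(c).sum()
    if np.linalg.norm(c) == 0 or np.linalg.norm(s) == 0:
        similarity = 0.0  # no shared trajectory, no suppression
    else:
        cos = float(np.dot(c, s)) / (np.linalg.norm(c) * np.linalg.norm(s))
        similarity = max(cos, 0.0)
    return excitation * (1.0 - similarity)

traj = np.arange(10, dtype=float)
print(oms_response(traj, traj))          # 0.0 (global motion: silenced)
print(oms_response(traj, np.zeros(10)))  # 9.0 (differential motion: fires)
```

The two calls reproduce the defining behaviour: the model cell is silent when center and surround share a trajectory, and fires when only the center moves.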
Article
Full-text available
Previously, we discovered that the broadband cells in the two magnocellular (large cell) layers of the monkey lateral geniculate nucleus (LGN) are much more sensitive to luminance contrast than are the color-sensitive cells in the four parvocellular (small cell) layers. We now report that this large difference in contrast sensitivity is due not to LGN circuitry but to differences in sensitivity of the retinal ganglion cells that provide excitatory synaptic input to the LGN neurons. This means that the parallel analysis of color and luminance in the visual scene begins in the retina, probably at a retinal site distal to the ganglion cells.
Article
Full-text available
Study of parallel processing in the visual pathway of the cat [1] has revealed several classes of retinal ganglion cells which are physiologically distinct and which project to various locations in the brain [2,3]. Two classes have been studied most extensively: X cells, which sum neural signals linearly over their receptive fields, and Y cells, in which the spatial summation is nonlinear [1,4]. In the cat's lateral geniculate nucleus (LGN) cells also can be classified as X or Y, a result of the parallel projection of retinal X and Y inputs to different geniculate neurones [5-9]. We report here our study of parallel signal processing in the LGN of the macaque monkey. We find that (1) monkey LGN cells can be classified as X or Y on the basis of spatial summation; (2) X-like cells are found in the four parvocellular and the two magnocellular laminae, whereas Y-like cells are found almost exclusively in the magnocellular laminae; and (3) the cells of the magnocellular laminae have high sensitivity and the parvocellular cells low sensitivity for homochromatic patterns. This implies that in macaque monkeys the magnocellular cells and their cortical projections may be the neural vehicle for contrast vision near threshold. The cells of the parvocellular laminae seem to be primarily concerned with wavelength discrimination and patterns of colour. As the human visual system is similar to that of the macaque in structure and behavioural performance, our findings are probably also applicable to man.
Article
Full-text available
It has been known for more than 40 years that images fade from perception when they are kept at the same position on the retina by abrogating eye movements. Although aspects of this phenomenon were described earlier, the use of close-fitting contact lenses in the 1950s made possible a series of detailed observations on eye movements and visual continuity. In the intervening decades, many investigators have studied the role of image motion on visual perception. Although several controversies remain, it is clear that images deteriorate and in some cases disappear following stabilization; eye movements are, therefore, essential to sustained exoptic vision. The time course of image degradation has generally been reported to be a few seconds to a minute or more, depending upon the conditions. Here we show that images of entoptic vascular shadows can disappear in less than 80 msec. The rapid vanishing of these images implies an active mechanism of image erasure and creation as the basis of normal visual processing.
Article
Full-text available
Transient lateral inhibition (TLI), the suppression of responses of a ganglion cell to light stimuli in the receptive field center by changes in illumination in the receptive field surround, was studied in light-adapted mud puppy and tiger salamander retinas using both eyecup and retinal slice preparations. In the eyecup, TLI was measured in on-off ganglion cells as the ability of rotating, concentric windmill patterns of 500-1200 micron inner diameter to suppress the response to a small spot stimulus in the receptive field center. Both the suppression of the spot response and the hyperpolarization produced in ganglion cells by rotation of the windmill were blocked in the presence of 2 microM strychnine or 500 nM tetrodotoxin (TTX), but not by 150 microM picrotoxin. In the slice preparation in which GABA-mediated currents were blocked with picrotoxin, IPSCs elicited by diffuse illumination were blocked by strychnine and strongly reduced by TTX. The TTX-resistant component was probably attributable to illumination of the receptive field center. TTX had a much greater effect in reducing the glycinergic inhibition elicited by laterally displaced stimulation versus nearby focal electrical stimulation. Strychnine enhanced light-evoked excitatory currents in ganglion cells, but this was not mimicked by TTX. The results suggest that local glycinergic transient inhibition does not require action potentials and is mediated by synapses onto both ganglion cell dendrites and bipolar cell terminals. In contrast, the lateral spread of this inhibition (at least over distances >250 micron) requires action potentials and is mainly onto ganglion cell dendrites.
Article
Full-text available
We correlated the morphology of salamander bipolar cells with characteristics of their light responses, recorded under voltage-clamp conditions. Twelve types of bipolar cells were identified, each displaying a unique morphology and level(s) of axon terminal stratification in the inner plexiform layer (IPL) and exhibiting light responses that differed with respect to polarity, kinetics, the relative strengths of rod and cone inputs, and characteristics of spontaneous EPSCs (sEPSCs) and IPSCs. In addition to the well known segregation of visual information into ON and OFF channels along the depth of the IPL, we found an overlying mapping of spectral information in this same dimension, with cone signals being transmitted predominantly to the central IPL and rod signals being sent predominantly to the margins of the IPL. The kinetics of bipolar cell responses correlated with this segregation of ON and OFF and of rod and cone information in the IPL. At light offset, rod-dominated cells displayed larger slow cationic current tails and smaller rapid overshoot responses than did cone-dominated cells. sEPSCs were generally absent in depolarizing bipolar cells but present in all hyperpolarizing bipolar cells (HBCs) and larger in rod-dominated HBCs than in cone-dominated HBCs. Inhibitory chloride currents, elicited both at light onset and light offset, tended to be larger for cone-dominated cells than for rod-dominated cells. This orderly segregation of visual signals along the depth of the IPL simplifies the integration of visual information in the retina, and it begins a chain of parallel processing in the visual system.
Article
Full-text available
A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
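In practice the white-noise approach described above is often realized as a spike-triggered average (STA): the spike-weighted mean of the stimulus recovers the neuron's linear filter even through a rectifying output nonlinearity. A minimal sketch against a synthetic linear-nonlinear neuron; the filter shape, half-wave rectifier, and Poisson spiking below are illustrative assumptions, not the paper's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic experiment: an LN neuron driven by white-noise frames
# (filter shape, rectifier, and Poisson spiking are illustrative).
T, D = 50_000, 20                        # frames, stimulus dimensions
stimulus = rng.standard_normal((T, D))   # Gaussian white-noise stimulus
true_filter = np.exp(-np.arange(D) / 5.0) * np.sin(np.arange(D) / 2.0)
drive = stimulus @ true_filter
rate = np.maximum(drive, 0.0)            # half-wave rectifying nonlinearity
spikes = rng.poisson(0.1 * rate)         # spike count per frame

# Spike-triggered average: mean stimulus weighted by spike count.
sta = (spikes @ stimulus) / spikes.sum()

# With Gaussian white noise, the STA recovers the filter up to scale.
print(f"correlation with true filter: {np.corrcoef(sta, true_filter)[0, 1]:.3f}")
```

For Gaussian white noise, the STA stays proportional to the underlying filter despite the static nonlinearity, which is why the technique sidesteps the failure of classical linear systems analysis mentioned in the abstract.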
Article
Full-text available
The receptive field of the Y-ganglion cell comprises two excitatory mechanisms: one integrates linearly over a narrow field, and the other integrates nonlinearly over a wide field. The linear mechanism has been attributed to input from bipolar cells, and the nonlinear mechanism has been attributed to input from a class of amacrine cells whose nonlinear "subunits" extend across the linear receptive field and beyond. However, the central component of the nonlinear mechanism could in theory be driven by bipolar input if that input were rectified. Recording intracellularly from the Y-cell in guinea pig retina, we blocked the peripheral component of the nonlinear mechanism with tetrodotoxin and found the remaining nonlinear receptive field to be precisely co-spatial with the central component of the linear receptive field. Both linear and nonlinear mechanisms were caused by an excitatory postsynaptic potential that reversed near 0 mV. The nonlinear mechanism depended neither on acetylcholine nor on feedback involving GABA or glycine. Thus the central components of the ganglion cell's linear and nonlinear mechanisms are apparently driven by synapses from the same rectifying bipolar cell.
Article
Full-text available
We examined the morphology and physiological response properties of the axon-bearing, long-range amacrine cells in the rabbit retina. These so-called polyaxonal amacrine cells all displayed two distinct systems of processes: (1) a dendritic field composed of highly branched and relatively thick processes and (2) a more extended, often sparsely branched axonal arbor derived from multiple thin axons emitted from the soma or dendritic branches. However, we distinguished six morphological types of polyaxonal cells based on differences in the fine details of their soma/dendritic/axonal architecture, level of stratification within the inner plexiform layer (IPL), and tracer coupling patterns. These morphological types also showed clear differences in their light-evoked response activity. Three of the polyaxonal amacrine cell types showed on-off responses, whereas the remaining cells showed on-center responses; we did not encounter polyaxonal cells with off-center physiology. Polyaxonal cells respected the on/off sublamination scheme in that on-off cells maintained dendritic/axonal processes in both sublamina a and b of the IPL, whereas processes of on-center cells were restricted to sublamina b. All polyaxonal amacrine cell types displayed large somatic action potentials, but we found no evidence for low-amplitude dendritic spikes that have been reported for other classes of amacrine cell. The center-receptive fields of the polyaxonal cells were comparable to the diameter of their respective dendritic arbors and, thus, were significantly smaller than their extensive axonal fields. This correspondence between receptive and dendritic field size was seen even for cells showing extensive homotypic and/or heterotypic tracer coupling to neighboring neurons. These data suggest that all polyaxonal amacrine cells are polarized functionally into receptive dendritic and transmitting axonal zones.
Article
Full-text available
Image movements relative to the retina are essential for the visual perception of stationary objects during fixation. Here we have measured fixational eye and head movements of the turtle, and determined their effects on the activity of retinal ganglion cells by simulating the movements on the isolated retina. We show that ganglion cells respond mainly to components of periodic eye movement that have amplitudes of roughly the diameter of a photoreceptor. Drift or small head movements have little effect. Driven cells that are located along contrast borders are synchronized, which reliably signals a preceding movement. In an artificial neural network, the estimation of spatial frequencies for various square wave gratings improves when timelocked to this synchronization. This could potentially improve stimulus feature estimation by the brain.
Article
A new and remarkable type of amacrine cell has been identified in the primate retina. Application of the vital dye acridine orange to macaque retinas maintained in vitro produced a stable fluorescence in the somata of apparently all retinal neurons in both the inner nuclear and ganglion cell layers. Large somata (∼15‐20 μm diam) were also consistently observed in the approximate center of the inner plexiform layer (IPL). Intracellular injections of horseradish peroxidase (HRP) made under direct microscopic control showed that the cells in the middle of the IPL constitute a single, morphologically distinct amacrine cell subpopulation. An unusual and characteristic feature of this cell type is the presence of multiple axons that arise from the dendritic tree and project beyond it to form a second, morphologically distinct arborization within the IPL; these cells have thus been referred to as axon‐bearing amacrine cells. The dendritic tree of the axon‐bearing amacrine cell is highly branched (∼40‐50 terminal dendrites) and broadly stratified, spanning the central 50% of the IPL so that the soma is situated between the outermost and innermost branches. Dendritic field size increases from ∼200 μm in diameter within 2 mm of the fovea to ∼500 μm in the retinal periphery. HRP injections of groups of neighboring cells revealed a regular intercell spacing (∼200‐300 μm in the retinal periphery), suggesting that dendritic territories uniformly cover the retina. One to four axons originate from the proximal dendrites as thin (<0.5 μm), smooth processes. The axons increase in diameter (∼1‐2 μm) as they course beyond the dendritic field and bifurcate once or twice into secondary branches. These branches give rise to a number of thin, bouton‐bearing collaterals that extend radially from the dendritic tree for 1‐3 mm without much further branching.
The result is a sparsely branched and widely spreading axonal tree that concentrically surrounds the smaller, more highly branched dendritic tree. The axonal tree is narrowly stratified over the central 0‐20% of the IPL; it is approximately ten times the diameter of the dendritic tree, resulting in a 100 times greater coverage factor. The clear division of an amacrine cell's processes into distinct dendritic and axonal components has recently been observed in other, morphologically distinct amacrine cell types of the cat and monkey retina and therefore represents a property common to a number of functionally distinct cell types. It is hypothesized that the axon‐bearing amacrine cells, like classical neurons, use action potentials to transmit signals over long distances in the IPL and, on the basis of previous immunohistochemical results, contain the inhibitory neurotransmitter GABA.
Article
We frequently reposition our gaze by making rapid ballistic eye movements that are called saccades. Saccades pose problems for the visual system, because they generate rapid, large-field motion on the retina and change the relationship between the object position in external space and the image position on the retina. The brain must ignore the one and compensate for the other. Much progress has been made in recent years in understanding the effects of saccades on visual function and elucidating the mechanisms responsible for them. Evidence suggests that saccades trigger two distinct neural processes: (1) a suppression of visual sensitivity, specific to the magnocellular pathway, that dampens the sensation of motion and (2) a gross perceptual distortion of visual space in anticipation of the repositioning of gaze. Neurophysiological findings from several laboratories are beginning to identify the neural substrates involved in these effects.
Article
1. We studied how responses to visual stimuli at spatially separated locations were combined by cat retinal ganglion cells. 2. The temporal signal which modulated the stimuli was a sum of sinusoids. Fourier analysis of the ganglion cell impulse train yielded first order responses at the modulation frequencies, and second order responses at sums and differences of the input frequencies. 3. Spatial stimuli were spots in the centre and periphery of the cell's receptive field. Four conditions of stimulation were used: centre alone, periphery alone, centre and periphery in phase, centre and periphery out of phase. 4. The effective first order response of the centre was defined as the response due to centre stimulation in the presence of periphery stimulation, but independent of the relative phases of the two regions. Likewise, the effective first order response of the periphery was defined as the response due to the periphery in the presence of centre stimulation, but independent of the relative phases of the two regions. These effective responses may be calculated by addition and subtraction of the measured responses to the combined stimuli. 5. There was a consistent difference between the first order frequency kernel of the effective centre and the first order kernel of the centre alone. The amplitudes of the effective centre responses were diminished at low frequencies of modulation compared to the isolated centre responses. Also, the phase of the effective centre's response to high frequencies was advanced. Such non-linear interaction occurred in all ganglion cells, X or Y, but the effects were larger in Y cells. 6. In addition to spatially uniform stimuli in the periphery, spatial grating patterns were also used. These peripheral gratings affected the first order kernel of the centre even though the peripheral gratings produced no first order responses by themselves. 7. 
The temporal properties of the non-linear interaction of centre and periphery were probed by modulation in the periphery with single sinusoids. The most effective temporal frequencies for producing non-linear summation were: (a) 4-15 Hz when all the visual stimuli were spatially uniform, (b) 2-8 Hz when spatial grating patterns were used in the periphery. 8. The characteristics of non-linear spatial summation observed in these experiments are explained by the properties of the contrast gain control mechanism which we have previously postulated.
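The sum-of-sinusoids bookkeeping above can be reproduced with a toy rectifying cell: drive two regions at different frequencies, Fourier-analyse the output, and first-order responses appear at the input frequencies while second-order responses appear at their sum and difference. A hedged sketch in which a half-wave rectifier stands in for the cell's nonlinearity and the frequencies are arbitrary choices:

```python
import numpy as np

fs = 1000.0                         # sample rate, Hz (arbitrary)
t = np.arange(0.0, 10.0, 1.0 / fs)  # 10 s record -> 0.1 Hz resolution
f1, f2 = 3.0, 7.0                   # the two modulation frequencies

# Centre and periphery modulated by different sinusoids.
centre = np.sin(2 * np.pi * f1 * t)
periphery = np.sin(2 * np.pi * f2 * t)

# Toy cell: half-wave rectification of the summed drive (stands in for
# the ganglion cell's nonlinearity; not a fitted model of the paper).
response = np.maximum(centre + periphery, 0.0)

spectrum = np.abs(np.fft.rfft(response)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

def amp(f):
    """Amplitude of the response component nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# First-order responses at f1 and f2; second-order responses at the
# difference (f2 - f1) and sum (f1 + f2) frequencies.
for f in (f1, f2, f2 - f1, f1 + f2):
    print(f"{f:5.1f} Hz: amplitude {amp(f):.3f}")
```

A purely linear cell would put no energy at the sum and difference frequencies, so any response there is direct evidence of nonlinear spatial summation.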
Article
Two-dimensional head rotations recorded from 2 subjects sitting still, without artificial head support, showed appreciable movement over the frequency range d.c. to 7 Hz. Capacity of the vestibulo-ocular reflex and visually guided eye movements to null motion over this dynamic range was examined by simultaneously recording 2-dimensional head and eye rotations while sinusoidally rotating subjects over the frequency range 0.1 to 15 Hz using small amplitudes. At best, oculomotor compensation removed about 90% of head motion from eye motion in space. Representative compensation was poorer. Compensation for natural motions of unsupported heads while sitting and standing was also incomplete, resulting in substantially more eye motion in space than was observed with the head supported. These observations, coupled with recent demonstrations of plasticity of the vestibulo-ocular reflex, led us to suggest that the degree of compensatory oculomotor response is actively adjusted downwards so as to guarantee sufficient retinal image motion to prevent perceptual fading when the body is relatively stationary, and is actively adjusted upwards so as to guarantee sufficient retinal stability to prevent perceptual blurring when the body moves actively. Seen this way, the goal of oculomotor compensation is not retinal image stabilization, but rather controlled retinal image motion adjusted so as to be optimal for visual processing over the full range of natural motions of the body.
Article
1. The mechanism which makes Y cells different from X cells was investigated. 2. Spatial frequency contrast sensitivity functions for the fundamental and second harmonic responses of Y cells to alternating phase gratings were determined. 3. The fundamental spatial frequency response was predicted by the Fourier transform of the sensitivity profile of the Y cell. The high spatial frequency cut-off of a Y cell's fundamental response was in this way related to the centre of the cell's receptive field. 4. The second harmonic response of a Y cell did not cut off at such a low spatial frequency as the fundamental response. This result indicated that the source of the second harmonic was a spatial subunit of the receptive field smaller in spatial extent than the centre. 5. Contrast sensitivity vs. spatial phase for a Y cell was measured under three conditions: a full grating, a grating seen through a centrally located window, a grating partially obscured by a visual shutter. The 2nd/1st harmonic sensitivity ratio went down with the window and up with the shutter. These results implied that the centre of Y cells was linear and also that the nonlinear subunits extended into the receptive field surround. 6. Spatial localization of the nonlinear subunits was determined by means of a spatial dipole stimulus. The nonlinear subunits overlapped the centre and surround of the receptive field and extended beyond both. 7. The nature of the Y cell nonlinearity was found to be rectification, as determined from measurements of the second harmonic response as a function of contrast. 8. Spatial models for the Y cell receptive field are proposed.
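The rectified-subunit account of the Y cell (points 7 and 8 above) can be sketched as a pooling model: small subunits half-wave rectify a contrast-reversing grating before summation, so the pooled signal modulates at twice the temporal frequency at every spatial phase, with no null position. The subunit lattice, Gaussian profiles, and grating parameters below are illustrative assumptions:

```python
import numpy as np

# 1-D retina sampled finely; subunit lattice and sizes are illustrative.
x = np.linspace(-0.1, 1.1, 240)
subunit_centres = np.linspace(0.0, 1.0, 16)
sigma = 0.03                              # assumed subunit radius

# Gaussian weighting profile of each rectifying subunit.
profiles = np.exp(-((x[None, :] - subunit_centres[:, None]) ** 2) / (2 * sigma**2))

def pooled_response(phase, freq=8.0, n_t=64):
    """Pooled output over one cycle of a contrast-reversing grating."""
    t = np.linspace(0.0, 1.0, n_t, endpoint=False)
    grating = np.sin(2 * np.pi * freq * x + phase)       # spatial pattern
    drive = profiles @ grating                           # per-subunit drive
    # temporal reversal, half-wave rectified per subunit, then summed
    return np.maximum(np.outer(np.sin(2 * np.pi * t), drive), 0.0).sum(axis=1)

# Frequency doubling at every spatial phase: the Y-cell signature.
for phase in (0.0, np.pi / 4, np.pi / 2):
    r = pooled_response(phase)
    spec = np.abs(np.fft.rfft(r - r.mean()))
    print(f"phase {phase:.2f}: dominant temporal harmonic = {np.argmax(spec[1:]) + 1}")
```

A linear (unrectified) pool, by contrast, has a null position: at the spatial phase where the subunit drives cancel, the response vanishes, which is how the null-position test separates X cells from Y cells.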
Article
Type 1 polyaxonal (PA1) amacrine cells have been identified previously in rabbit retina, and their morphological characteristics have been described in detail in the preceding paper. Like other polyaxonal amacrine cells they bear distinct dendritic and axonal branching systems, the latter of which originates in two to six thin, branching axons which emerge from or near to the cell body. Unlike other types of polyaxonal amacrine cells, however, their branching is stratified at the a/b sublaminar border and their cell bodies are most often displaced interstitially in the inner plexiform layer (IPL). This report emphasizes quantitative features of the population of PA1 cells, documented in Golgi‐impregnated and Nissl‐stained retinas, and provides further evidence in Nissl preparations for the amacrine‐cell nature of polyaxonal amacrine cells. The cell bodies of Golgi‐impregnated PA1 amacrine cells are relatively large: 12–15 μm in equivalent diameter over the range extending from the visual streak 6 mm into ventral retina. Over the same range, dendritic trees are 400–800 μm in equivalent diameter, but they are much smaller than the axonal arborizations, which extend up to and perhaps beyond 2 mm from the cell body. Interstitial cell bodies appropriate to PA1 cells have been identified in Nissl‐stained, whole‐mounted rabbit retinas. In the plane of the retina, these are comparable in area to smaller medium‐size ganglion cells, but their very pale Nissl staining, high nuclear/cytoplasmic ratio, and absence of nucleolar staining are all characteristics of amacrine cells. Interstitial displacement of presumed PA1 cells is rare in the visual streak, and the frequency of interstitial cells reaches a peak between 1 and 2 mm ventral to the streak. 
Counts in Nissl‐stained retinas and estimates from nearest neighbor analyses in these and in Golgi‐impregnated retinas indicate a density of PA1 cells in the range of 15–16 cells/mm² at about 2 mm ventral to the streak, when an estimated 25% shrinkage of the material is taken into account. Dendritic field overlap, based upon this estimate, is calculated to be about fourfold, while a lower bound to estimates of the overlap of axonal arborizations is nearly an order of magnitude higher. Many similarities are noted in a qualitative and quantitative comparison of PA1 amacrine cells in rabbit and monkey retinas. In assessing the contribution of the structural organization of PA1 amacrine cells to their possible functional role(s), it is notable that their appearance conforms not to amacrine cells as commonly viewed, but to a more conventional model of neuronal dynamic polarization. PA1 amacrine cells have a dendritic tree of limited extent and an axonal arborization capable of influencing a wide retinal area. This structural arrangement could operate to transmit signals generated by focal stimuli impinging upon the dendritic tree to a wide surrounding area of retina, where they could modulate these distal regions primarily. A role is suggested for PA1 cells in the retinal mechanisms of neural adaptation.
Article
1. The stability of gaze was measured in nine normal subjects during 30-s epochs of standing, walking in place, and running in place. The angle of gaze and head rotations in horizontal and vertical planes were measured using the magnetic search coil technique. Subjects visually fixed on a stationary object located at a distance of 100 m; thus measurements of gaze indicated the stability of images on the retina. 2. During standing, walking, or running in place, the standard deviation of the angle of gaze was less than 0.4 degrees, both horizontally and vertically. During standing and walking in place, peak gaze velocity (Gp) was less than 3.0 degrees/s. During running in place, Gp was less than 3.0 degrees/s horizontally but ranged up to 9.3 degrees/s vertically. 3. Visual acuity was measured during standing, walking, and running in place. During walking in place, five of nine subjects showed a small but significant (P = 0.03) decline in visual acuity compared with standing. During running in place, all nine subjects showed a small but significant (P = 0.002) decline in visual acuity compared with standing. 4. Stability of gaze was also measured during vigorous, voluntary head rotations in the horizontal (yaw) or vertical (pitch) planes, for 15-s epochs. Gp ranged as high as 70 degrees/s horizontally and 41 degrees/s vertically. All subjects reported illusory movement of the seen environment during these head rotations. 5. The suitability of linear systems techniques for analysis of the horizontal and vertical vestibuloocular reflex (VOR) during walking and running in place was assessed using coherence spectral analysis.(ABSTRACT TRUNCATED AT 250 WORDS)
Article
We have studied the properties of neurones in the lateral geniculate nucleus (l.g.n.) of Old World monkeys, both in mature animals and throughout post-natal development. Cells were classified as X (linear) or Y (non-linear) on the basis of their responses to contrast-reversing achromatic gratings ('null position test'). In older animals virtually all parvocellular neurones and the majority of magnocellular units were X cells; only about 15% of magnocellular neurones displayed highly non-linear spatial summation, with no 'null position', typical of Y cells. X cells could not reliably be distinguished from Y cells, nor magnocellular from parvocellular, on the basis of their temporal patterns of discharge. Some Y cells responded transiently to contrast reversal of a grating far from the receptive field but X cells showed little or no such 'shift effect'. The spatial resolution of mature l.g.n. cells varied with the eccentricity of their receptive fields such that the best of them, at each point in the visual field, resolved drifting achromatic gratings about as well as a human observer. X cells in parvocellular and magnocellular layers had similar 'acuities', even in the central foveal representation, but Y cells generally had poorer resolution. Receptive fields in the temporal retina tended to have lower resolution than those at comparable eccentricities in the nasal retina. Even on the day of birth all cells we studied responded to visual stimulation and virtually all could be classified as X or Y. The laminar distribution of cell types and the general morphological appearance of the nucleus seemed very similar to those in the adult, but neurones in very young animals had low spontaneous activity, sluggish responses, and latencies to visual stimulation longer than any we saw in the adult. Until 3 weeks of age or so, many neurones suffered cumulative 'fatigue' when visually stimulated over several minutes. Visual latency was essentially mature by about 10 weeks. 
In the l.g.n. of the neonatal monkey there was little variation in neuronal 'acuity' with eccentricity: even in the foveal area the best cells could resolve only about 5 cycles/deg. Over the first year or more of life there is a gradual increase in responsiveness and about a 7-fold improvement in spatial resolution for foveal l.g.n. cells, correlating roughly with the behavioural maturation of visual acuity.
Article
Neurons were recorded in the superficial layers of the superior colliculus in anesthetized monkeys. As classically described, cells were non-selective for target direction and speed when the target moved through an empty visual field. However, these same cells were sensitive to target direction and speed relative to a textured moving background. The response to the target was suppressed when its direction and speed were similar to those of the background, irrespective of the absolute direction of background movement.
Article
Lateral interactions at the inner plexiform layer of the retina of the mudpuppy were studied intracellularly after they were isolated from interactions at the outer plexiform layer with a special stimulus. The isolation was confirmed by recording no surround effect at bipolar cells under conditions that elicited a strong surround effect at ganglion cells. It appears that amacrine cells, which respond to spatiotemporal change at one retinal region, inhibit the responses to change of on-off ganglion cells at adjacent sites.
Article
Eye and head movements in the horizontal, frontal and sagittal planes were recorded in the rabbit with a newly developed technique using dual scleral search coils in a rotating magnetic field. The compensatory eye movements elicited by passive sinusoidal oscillation deteriorated for frequencies below 0.1 Hz in the horizontal, but not in the frontal and sagittal planes. In the light, gain was relatively independent of frequency in all planes and amounted to 0.82-0.69, 0.92-0.83 and 0.65-0.59 in the horizontal, frontal and sagittal plane, respectively. In freely moving animals, similar input-output relations were found. The stability of the retinal image thus proved to be inversely proportional to the amount of head movement associated with behavioural activity. Maximal retinal image velocities varied between 2-4 degrees/s for a rabbit sitting quietly and 30-40 degrees/s during locomotor activity. Gaze displacements showed different characteristics in the various planes, possibly in relation to the structure of the retinal visual streak. Horizontal gaze changes were mainly effected by saccades. Gaze changes in the frontal plane were relatively rare and effected by non-saccadic, combined head and eye movements with temporary suppression of compensatory eye movements. Eye rotations in the sagittal plane, possibly functioning to adjust the direction of binocular vision vertically, were abundant and effected by large head movements in combination with a low gain of compensatory eye movements in this plane.
Article
Cells in intermediate and deeper layers of the pigeon optic tectum respond best when a textured background pattern is moved in the opposite direction to a moving test spot. Complete inhibition occurs when the background moves in the same direction as the test stimulus. Most noteworthy is the invariance of this relationship over a wide range of test spot directions. These cells represent a higher level of abstraction in a motion-detecting system and may play a role in figure-ground segregation or the discrimination of the motion of an object from self-induced optical motion.
Article
Responses of superficial-layer, texture-sensitive complex cells in cat striate cortex to relative motion between an oriented bar stimulus and its textured background were recorded. Some cells responded best to motion in one particular direction across the receptive field of the cell, irrespective of whether the bar and background moved simultaneously in the same (in-phase) or opposite (antiphase) directions. Others showed a clear preference for either in-phase or antiphase relative motion, irrespective of direction of motion across the receptive field.
Article
1. Action potentials were recorded from single fibres in the optic tract of anaesthetized cats. 2. A sectored disk or 'windmill', concentric with the receptive field, was rotated about its centre to cause local changes in illumination throughout the receptive field without changing the total amount of light falling on the receptive field centre or surround. 3. A cell's response to a flashing test spot centred on its receptive field was measured both while the windmill was stationary and while it rotated. While the windmill rotated, the test spot evoked a smaller average number of spikes than while the windmill was stationary. 4. The reduction in response occurred in both on-centre and off-centre cells and in both X-cells and Y-cells, though the reduction in response was smaller in X-cells. 5. Surround responses, evoked by an eccentric stimulus, were also reduced by a moving peripheral pattern. 6. Suppression was graded with the contrast of the moving pattern. 7. Gratings too fine to be resolved by the receptive field centre could suppress the response of Y-cells. This suggests that the local elements responsible for the suppression are smaller than the receptive field centres of Y-cells. 8. Response suppression started within 100 msec of the onset of pattern motion.
Article
Horizontal binocular eye and head movements of 4 human subjects were recorded by means of the sensor coil-rotating magnetic field technique while they actively rotated their heads about a vertical axis and maintained fixation on a distant target. The frequency and peak-to-peak amplitude of these rotations ranged from about 0.25 Hz to 5 Hz and 30° to 15′. Eye movement compensation of such head rotations was far from perfect and compensation was different in each eye. Average retinal image speed was on the order of 4 deg/sec within each eye and the speed of the changes in retinal image position between the eyes was on the order of 3 deg/sec. Vision, subjectively, remained fused, stable and clear. Attention is called to implications of these results for visual and oculomotor physiology.
Article
Throughout the central nervous system, information about the outside world is represented collectively by large groups of cells, often arranged in a series of 2-dimensional maps connected by tracts with many fibers. To understand how such a circuit encodes and processes information, one must simultaneously observe the signals carried by many of its cells. This article describes a new method for monitoring the simultaneous electrical activity of many neurons in a functioning piece of retina. Extracellular action potentials are recorded with a planar array of 61 microelectrodes, which provides a natural match to the flat mosaic of retinal ganglion cells. The voltage signals are processed in real time to extract the spike trains from up to 100 neurons. We also present a method of visual stimulation and data analysis that allows a rapid characterization of each neuron's visual response properties. A randomly flickering display is used to elicit spike trains from the ganglion cell population. Analysis of the correlations between each spike train and the flicker stimulus results in a simple description of each ganglion cell's functional properties. The combination of these tools will allow detailed study of how the population of optic nerve fibers encodes a visual scene.
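The correlation between each spike train and the random flicker stimulus, used above to characterize each ganglion cell, is commonly computed as a spike-triggered average (STA). A minimal Python/NumPy sketch; the function name, array sizes, and the toy cell are illustrative assumptions, not details from the paper:

```python
import numpy as np

def spike_triggered_average(stimulus, spike_counts, window):
    """Average the stimulus over the `window` frames preceding each spike.

    stimulus:     (T, ...) array of flicker frames, one per time bin
    spike_counts: (T,) array of spike counts per time bin for one cell
    window:       number of frames of stimulus history to average
    """
    T = len(spike_counts)
    sta = np.zeros((window,) + stimulus.shape[1:])
    n_spikes = 0
    for t in range(window, T):
        if spike_counts[t] > 0:
            sta += spike_counts[t] * stimulus[t - window:t]
            n_spikes += spike_counts[t]
    return sta / n_spikes if n_spikes else sta

# Toy cell: fires whenever the previous flicker frame was bright, so the
# STA recovers a filter peaked one frame before the spike.
rng = np.random.default_rng(0)
stim = rng.choice([-1.0, 1.0], size=1000)      # binary full-field flicker
spikes = (np.roll(stim, 1) > 0).astype(int)    # spike if previous frame bright
sta = spike_triggered_average(stim, spikes, window=5)
# sta[-1] is exactly +1.0 (the frame just before each spike); earlier frames average near 0
```

The averaged window is the cell's linear filter estimate; with spatiotemporal stimuli the same computation yields a space-time receptive field.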
Article
We characterized the light response, morphology, and receptive-field structure of a distinctive amacrine cell type (Dacey, 1989), termed here the A1 amacrine, by applying intracellular recording and staining methods to the macaque monkey retina in vitro. A1 cells show two morphologically distinct components: a highly branched and spiny dendritic tree, and a more sparsely branched axon-like tree that arises from one or more hillock-like structures near the soma and extends for several millimeters beyond the dendritic tree. Intracellular injection of Neurobiotin reveals an extensive and complex pattern of tracer coupling to neighboring A1 amacrine cells, to two other amacrine cell types, and to a single ganglion cell type. The A1 amacrine is an ON-OFF cell, showing a large (10-20 mV) transient depolarization at both onset and offset of a photopic, luminance modulated stimulus. A burst of fast, large-amplitude (approximately 60 mV) action potentials is associated with the depolarizations at both the ON and OFF phase of the response. No evidence was found for an inhibitory receptive-field surround. The spatial extent of the ON-OFF response was mapped by measuring the strength of the spike discharge and/or the amplitude of the depolarizing slow potential as a function of the position of a bar or spot of light within the receptive field. Receptive fields derived from the slow potential and associated spike discharge corresponded in size and shape. Thus, the amplitude of the slow potential above spike threshold was well encoded as spike frequency. The diameter of the receptive field determined from the spike discharge was approximately 10% larger than the spiny dendritic field. The correspondence in size between the spiking receptive field and the spiny dendritic tree suggests that light-driven signals are conducted to the soma from the dendritic tree but not from the axon-like arbor. The function of the axon-like component is unknown, but we speculate that it serves a classical output function, transmitting spikes distally from initiation sites near the soma.
Article
Decoding visual information from a population of retinal ganglion cells. J. Neurophysiol. 78: 2336–2350, 1997. This work investigates how a time-dependent visual stimulus is encoded by the collective activity of many retinal ganglion cells. Multiple ganglion cell spike trains were recorded simultaneously from the isolated retina of the tiger salamander using a multielectrode array. The stimulus consisted of photopic, spatially uniform, temporally broadband flicker. From the recorded spike trains, an estimate was obtained of the stimulus intensity as a function of time. This was compared with the actual stimulus to assess the quality and quantity of visual information conveyed by the ganglion cell population. Two algorithms were used to decode the spike trains: an optimized linear filter in which each action potential ...
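The optimized linear filter mentioned in this abstract can be sketched as a least-squares regression from lagged spike counts onto the stimulus: each cell contributes a decoding kernel, and the kernels are fit jointly to minimize reconstruction error. A hypothetical NumPy illustration; the function name, array shapes, toy cell, and lag window are illustrative assumptions, not details from the paper:

```python
import numpy as np

def fit_linear_decoder(rates, stim, n_lags):
    """Least-squares linear decoder.

    Estimates stim[t] as a weighted sum of each cell's firing over a short
    window of bins at and after t (since spikes lag the stimulus):
        stim[t] ~= sum_c sum_l h[c, l] * rates[c, t + l]
    """
    n_cells, T = rates.shape
    n_rows = T - n_lags + 1
    # Design matrix: one column per (cell, lag) pair.
    X = np.zeros((n_rows, n_cells * n_lags))
    for c in range(n_cells):
        for l in range(n_lags):
            X[:, c * n_lags + l] = rates[c, l : l + n_rows]
    y = stim[:n_rows]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)   # optimal filter coefficients
    return h, X @ h, y                          # filters, reconstruction, target

# Toy example: one cell whose rate copies the stimulus one bin later, so the
# decoder recovers the stimulus exactly by weighting the lag-1 column.
rng = np.random.default_rng(1)
stim = rng.standard_normal(500)
rates = np.zeros((1, 500))
rates[0, 1:] = stim[:-1]
h, est, y = fit_linear_decoder(rates, stim, n_lags=3)
# est matches y, with nearly all weight on the lag-1 coefficient h[1]
```

With real, noisy spike trains the fit would typically be regularized and cross-validated; the sketch above only shows the structure of the linear reconstruction.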