Charles M. Higgins’s research while affiliated with The University of Arizona and other places



Publications (34)


The two-stage network for visual submodality refinement and visual binding. Large circles represent units in the neural network. Unshaded half-circles at connections indicate excitation, and filled half-circles indicate inhibition. During the first phase of training, neurons in the three first-stage networks learn to mutually inhibit one another, thus refining the representation of motion, orientation, and color and resulting in more selective tunings in each submodality. In the second phase of training with a stimulus comprised of moving bars, the second-stage visual binding network learns the relative strengths of visual object features based on common temporal fluctuations and develops an inhibitory weight matrix to produce outputs that reflect the temporal fluctuations unique to each object
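The second-stage binding update described above can be pictured with a short sketch. This is a minimal illustration under stated assumptions: outputs are taken as the steady state of the inputs under recurrent inhibition, and off-diagonal inhibitory weights grow with co-fluctuation of the outputs (an anti-Hebbian-style rule standing in for the paper's actual learning rule, referred to as (5) in the figures below).

```python
import numpy as np

def binding_step(T, i_t, eta=1e-4):
    """One timestep of a generic second-stage binding network (illustrative only).
    T: non-negative inhibitory weight matrix; i_t: vector of submodality inputs at time t."""
    n = len(i_t)
    # Steady-state outputs u satisfying u = i_t - T @ u (recurrent inhibition)
    u = np.linalg.solve(np.eye(n) + T, i_t)
    # Strengthen inhibition between outputs that fluctuate together (hedged stand-in rule)
    dT = eta * np.outer(u, u)
    np.fill_diagonal(dT, 0.0)            # no self-inhibition
    T_new = np.clip(T + dT, 0.0, None)   # keep inhibitory strengths non-negative
    return T_new, u
```

Driving this update with inputs whose temporal fluctuations are shared within, but not across, objects tends to group strongly co-fluctuating channels in the weight matrix, which is the qualitative behavior the caption describes.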
Diagram of wide-field visual input computation from input images. Each input RGB image was converted to grayscale (G) for orientation and motion processing. Image motion was computed by the Hassenstein–Reichardt (HR) elementary motion detection model in the horizontal ($I_H$) and vertical ($I_V$) directions and then separated into four feature images $I_\mathrm{left}$, $I_\mathrm{right}$, $I_\mathrm{down}$, and $I_\mathrm{up}$, respectively, containing strictly positive leftward ($H < 0$), rightward ($H > 0$), downward ($V < 0$), and upward ($V > 0$) components. Three orientation-selective DoG filter kernels were convolved with the grayscale image to produce three orientation feature images $I_{0^\circ}$, $I_{60^\circ}$, and $I_{120^\circ}$. The individual red, green, and blue color planes were taken as the final three feature images $I_\mathrm{red}$, $I_\mathrm{grn}$, and $I_\mathrm{blu}$. Each of these ten 2D feature images was then spatially summed to create a scalar value representing wide-field image feature content. The inputs $i_1(t)$ through $i_{10}(t)$ became input to the neural network model of Fig. 1
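A compact way to see how the ten scalar inputs arise is the following sketch. It is a simplified stand-in, not the paper's implementation: the HR detector is reduced to a one-frame delay-and-correlate stage, the oriented DoG kernels are replaced by crude gradient-based edge responses, and grayscale conversion is a plain channel mean.

```python
import numpy as np

def wide_field_inputs(prev_rgb, curr_rgb):
    """Return the ten scalar wide-field inputs i_1(t)..i_10(t) (illustrative sketch)."""
    prev_g = prev_rgb.mean(axis=2)   # grayscale (G), previous frame
    curr_g = curr_rgb.mean(axis=2)   # grayscale (G), current frame

    # Hassenstein-Reichardt correlation with the previous frame as the 'delayed' arm
    def hr(axis):
        shift = lambda im: np.roll(im, -1, axis=axis)   # neighbouring photoreceptor
        return prev_g * shift(curr_g) - curr_g * shift(prev_g)

    H = hr(axis=1)   # > 0 for rightward motion
    V = hr(axis=0)   # > 0 for motion toward larger row index
    relu = lambda x: np.maximum(x, 0.0)
    I_left, I_right = relu(-H), relu(H)
    I_down, I_up = relu(V), relu(-V)

    # Crude oriented edge responses standing in for the three oriented DoG kernels
    gy, gx = np.gradient(curr_g)
    def oriented(theta_deg):
        t = np.deg2rad(theta_deg)
        return np.abs(np.cos(t) * gx + np.sin(t) * gy)
    I_0, I_60, I_120 = oriented(0), oriented(60), oriented(120)

    # Color feature images are simply the individual planes
    I_red, I_grn, I_blu = curr_rgb[..., 0], curr_rgb[..., 1], curr_rgb[..., 2]

    features = [I_left, I_right, I_down, I_up, I_0, I_60, I_120, I_red, I_grn, I_blu]
    return np.array([f.sum() for f in features])   # spatial summation -> ten scalars
```

The paper additionally high-pass filters these signals (with time constant $\tau_{HI}$, see the enhancement diagram below) before they enter the network.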
Outputs and weights of the second-stage network as it trained with a visual stimulus comprised of two bars moving through sinusoidal shadow. A legend to identify each trace in panels a and b is shown at upper left. a Network outputs when using the competitive learning rule of (5), in which the activation functions f() and g() are switched in position relative to conventional learning rules for BSS, and thus emphasize patterns of column over row weights. Note that over the time of training, the red and green outputs come to inhibit all others. b Network outputs when using the conventional cooperative learning rule, which emphasizes patterns of row over column weights. No pattern of outputs is evident apart from nearly uniform inhibition. c Final weight matrix $\mathbf{T}$ using the competitive learning rule of (5). Brighter colors indicate stronger inhibition. The strongest weights are in the red and green columns, and clearly indicate the features of each bar. d Final weight matrix $\mathbf{T}$ using the cooperative learning rule
Extraction of object-level information. a Simplified visual binding matrix $\mathbf{T}^\mathrm{simp}$, from which object features can clearly be read. b Feature matrix $\mathbf{O}$ extracted from $\mathbf{T}^\mathrm{simp}$ using (10). Each row of the object matrix corresponds to a unique object. The associated vector $\mathbf{r} = [\,8\ 9\,]$ indicates that the first row of the object matrix corresponds to output 8 (red) and the second row to output 9 (green)
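Reading object-level information out of the simplified binding matrix can be sketched as below. The threshold rule is an illustrative assumption standing in for the paper's extraction equation (10), which is not reproduced on this page; it follows the caption's convention that strong columns correspond to object-representing outputs.

```python
import numpy as np

def extract_objects(T_simp, thresh=0.5):
    """Illustrative extraction of the object matrix O and index vector r from T_simp."""
    col_strength = T_simp.max(axis=0)              # peak inhibitory weight per output column
    r = np.flatnonzero(col_strength > thresh)      # outputs that represent objects (e.g. [8, 9])
    O = (T_simp[:, r] > thresh).T.astype(float)    # one row of bound features per object
    return O, r
```

In the example above, r = [8 9] would mark outputs 8 (red) and 9 (green) as the two objects, and the rows of O would list each bar's motion, orientation, and color features.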
Computational diagram for creating an enhanced image. Thin lines indicate scalar quantities; thick lines represent matrices. Starting at figure left, each of the ten 2D feature images created in Fig. 2 is first processed through a first-order high-pass filter (HPF) using the same time constant $\tau_{HI}$ as was used for network inputs, and the absolute value of each matrix taken (ABS) so that both increases and decreases in features are represented in the output. Each of the resulting filtered matrices is then multiplied ($\Pi$) by the corresponding scalar weight $f_j$ from (12). These weighted feature images are summed point-by-point ($\Sigma$) into a single RGB image, after which this image is normalized (NORM) by a scalar value corresponding to its maximum over all pixels and color planes. This results in an RGB "mask" that identifies where salient features of the input image exist. This mask is then multiplied point-by-point ($\Pi$) with the input RGB image, resulting in a 2D RGB enhanced image that emphasizes the most salient object. Note that motion and orientation feature images contribute equally to red, green, and blue color planes, but red, green, and blue feature images contribute only to their own color plane
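The enhancement pipeline in this diagram (HPF, ABS, weighting by $f_j$, point-by-point summation, normalization, and masking of the input image) can be written compactly. The discrete high-pass filter form, and the convention that motion and orientation features feed all three color planes while each color feature feeds only its own plane, follow the caption; the specific constants are assumptions.

```python
import numpy as np

def enhance(feat, feat_prev, hpf_prev, f, rgb, dt, tau_hi):
    """Build the RGB saliency mask and the enhanced image (illustrative sketch).
    feat / feat_prev: the ten current / previous 2D feature images;
    hpf_prev: previous HPF outputs (same shapes); f: scalar weights f_j from (12)."""
    alpha = tau_hi / (tau_hi + dt)                 # first-order HPF coefficient
    mask = np.zeros(rgb.shape, dtype=float)        # RGB mask accumulator
    hpf_out = []
    # Motion and orientation features (first 7) feed all planes; colors feed their own plane
    planes = [(0, 1, 2)] * 7 + [(0,), (1,), (2,)]
    for x, x_prev, y_prev, w, pl in zip(feat, feat_prev, hpf_prev, f, planes):
        y = alpha * (y_prev + x - x_prev)          # high-pass filter (HPF)
        hpf_out.append(y)
        contrib = w * np.abs(y)                    # ABS, then scalar weight f_j
        for p in pl:
            mask[..., p] += contrib                # point-by-point sum (Sigma)
    mask /= max(mask.max(), 1e-12)                 # NORM by maximum over pixels and planes
    return mask * rgb, hpf_out                     # enhanced image and updated HPF state
```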


An insect-inspired model for visual binding II: functional analysis and visual attention
  • Article
  • Full-text available

April 2017 · 60 Reads · 1 Citation · Biological Cybernetics · Charles M. Higgins

We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features—such as color, motion, and orientation—by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.


An insect-inspired model for visual binding I: learning objects and their characteristics

April 2017 · 101 Reads · 1 Citation · Biological Cybernetics

Visual binding is the process of associating the responses of visual interneurons in different visual submodalities all of which are responding to the same object in the visual field. Recently identified neuropils in the insect brain termed optic glomeruli reside just downstream of the optic lobes and have an internal organization that could support visual binding. Working from anatomical similarities between optic and olfactory glomeruli, we have developed a model of visual binding based on common temporal fluctuations among signals of independent visual submodalities. Here we describe and demonstrate a neural network model capable both of refining selectivity of visual information in a given visual submodality, and of associating visual signals produced by different objects in the visual field by developing inhibitory neural synaptic weights representing the visual scene. We also show that this model is consistent with initial physiological data from optic glomeruli. Further, we discuss how this neural network model may be implemented in optic glomeruli at a neuronal level.


Fig. 1. Recently fabricated digital PCB version in the Higgins laboratory.  
Fig. 3. The MDT4 template and the waveforms used to build it.  
Fig. 4. The complete set of templates, shown for comparison of their shapes.  
Fig. 5. Test case for MDT3, DIT3, and MDT1. MDT3 spikes dominate the recording, with only a few DIT3 and MDT1 spikes present. Other TSDN spikes are considered unexpected results (dotted line).  
A visual motion detecting module for dragonfly-controlled robots

August 2014 · 154 Reads · 6 Citations
When imitating biological sensors, the early processing of sensory input is often not understood well enough to reproduce it artificially. Building hybrid systems with both artificial and real biological components is a promising solution. For example, when a dragonfly is used as a living sensor, the early processing of visual information is performed entirely in the brain of the dragonfly. The only significant remaining tasks are recording and processing neural signals in software and/or hardware. Building on existing work focused on recording neural signals, this paper proposes a software approach to neural information processing that forms a visual processing module for dragonfly hybrid bio-robots. After a neural signal is recorded in real time, action potentials can be detected and matched with predefined templates to determine when, and which, descending neurons fire. The output of the proposed system will be used to control other parts of the robot platform.
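The detection-and-matching step described in the abstract can be sketched as follows. Window length, threshold, and the squared-distance metric are illustrative choices rather than the paper's parameters, and the template names are placeholders for waveforms such as the MDT3 or DIT3 templates in the figures above.

```python
import numpy as np

def classify_spikes(signal, templates, thresh, fs, win=32):
    """Detect threshold crossings and label each spike with its best-matching template.
    signal: 1-D recorded trace; templates: dict mapping neuron name -> waveform of length `win`."""
    crossings = np.flatnonzero((signal[1:] >= thresh) & (signal[:-1] < thresh)) + 1
    events = []
    for t in crossings:
        if t + win > len(signal):
            continue
        snippet = signal[t:t + win]
        names = list(templates)
        dists = [np.sum((snippet - templates[n]) ** 2) for n in names]   # squared distance
        events.append((t / fs, names[int(np.argmin(dists))]))            # (time in s, neuron label)
    return events
```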


Tracking improves performance of biological collision avoidance models

June 2012 · 8 Reads · 5 Citations · Biological Cybernetics

Collision avoidance models derived from the study of insect brains do not perform universally well in practical collision scenarios, although the insects themselves may perform well in similar situations. In this article, we present a detailed simulation analysis of two well-known collision avoidance models and illustrate their limitations. In doing so, we present a novel continuous-time implementation of a neuronally based collision avoidance model. We then show that visual tracking can improve performance of these models by allowing a relative computation of the distance between the obstacle and the observer. We compare the results of simulations of the two models with and without tracking to show how tracking improves the ability of the model to detect an imminent collision. We present an implementation of one of these models processing imagery from a camera to show how it performs in real-world scenarios. These results suggest that insects may track looming objects with their gaze.


A neuronally based model of contrast gain adaptation in fly motion vision

August 2011 · 32 Reads · 3 Citations · Visual Neuroscience

Motion-sensitive neurons in the visual systems of many species, including humans, exhibit a depression of motion responses immediately after being exposed to rapidly moving images. This motion adaptation has been extensively studied in flies, but a neuronal mechanism that explains the most prominent component of adaptation, which occurs regardless of the direction of motion of the visual stimulus, has yet to be proposed. We identify a neuronal mechanism, namely frequency-dependent synaptic depression, which explains a number of the features of adaptation in mammalian motion-sensitive neurons and use it to model fly motion adaptation. While synaptic depression has been studied mainly in spiking cells, we use the same principles to develop a simple model for depression in a graded synapse. By incorporating this synaptic model into a neuronally based model for elementary motion detection, along with the implementation of a center-surround spatial band-pass filtering stage that mimics the interactions among a subset of visual neurons, we show that we can predict with remarkable success most of the qualitative features of adaptation observed in electrophysiological experiments. Our results support the idea that diverse species share common computational principles for processing visual motion and suggest that such principles could be neuronally implemented in very similar ways.
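As a rough illustration of the mechanism named in the abstract, a graded synapse with frequency-dependent depression can be modeled with a single resource variable that recovers slowly and is depleted in proportion to graded presynaptic drive; sustained strong drive (fast motion) then depresses the transmitted signal. The equations and parameter values below are generic assumptions in the spirit of the abstract, not the paper's model.

```python
import numpy as np

def depressing_graded_synapse(pre, dt, tau_rec=0.5, u=0.2, gain=1.0):
    """Transmit a graded presynaptic signal `pre` through a depressing synapse (sketch)."""
    r = 1.0                                   # fraction of available synaptic resources
    out = np.empty(len(pre), dtype=float)
    for k, x in enumerate(pre):
        release = u * max(x, 0.0) * r         # resources released by graded presynaptic drive
        r += dt * (1.0 - r) / tau_rec         # slow recovery toward full availability
        r = min(max(r - dt * release, 0.0), 1.0)
        out[k] = gain * release               # postsynaptic drive scales with released resources
    return out
```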


Non-directional motion detectors can be used to mimic optic flow dependent behaviors

December 2010 · 22 Reads · 8 Citations · Biological Cybernetics

Insect navigational behaviors including obstacle avoidance, grazing landings, and visual odometry are dependent on the ability to estimate flight speed based only on visual cues. In honeybees, this visual estimate of speed is largely independent of both the direction of motion and the spatial frequency content of the image. Electrophysiological recordings from the motion-sensitive cells believed to underlie these behaviors have long supported spatio-temporally tuned correlation-type models of visual motion detection whose speed tuning changes as the spatial frequency of a stimulus is varied. The result is an apparent conflict between behavioral experiments and the electrophysiological and modeling data. In this article, we demonstrate that conventional correlation-type models are sufficient to reproduce some of the speed-dependent behaviors observed in honeybees when square wave gratings are used, contrary to the theoretical predictions. However, these models fail to match the behavioral observations for sinusoidal stimuli. Instead, we show that non-directional motion detectors, which underlie the correlation-based computation of directional motion, can be used to mimic these same behaviors even when narrowband gratings are used. The existence of such non-directional motion detectors is supported both anatomically and electrophysiologically, and they have been hypothesized to be critical in the Dipteran elementary motion detector (EMD) circuit.
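To make the distinction concrete, the sketch below builds a Hassenstein-Reichardt correlator from two photoreceptor signals and shows one simple way to obtain a non-directional signal from its two arms (summing rather than subtracting them). This is an illustrative construction under assumed first-order delay filters, not the specific non-directional detector circuit hypothesized for the Dipteran EMD.

```python
import numpy as np

def correlator_outputs(a, b, dt, tau=0.035):
    """Directional and non-directional outputs from two photoreceptor signals a(t), b(t)."""
    alpha = dt / (tau + dt)
    a_lp = np.zeros_like(a, dtype=float)
    b_lp = np.zeros_like(b, dtype=float)
    for k in range(1, len(a)):                 # first-order low-pass 'delay' stages
        a_lp[k] = a_lp[k-1] + alpha * (a[k] - a_lp[k-1])
        b_lp[k] = b_lp[k-1] + alpha * (b[k] - b_lp[k-1])
    arm_ab = a_lp * b                          # tuned to motion from a toward b
    arm_ba = b_lp * a                          # tuned to motion from b toward a
    directional = arm_ab - arm_ba              # opponent (directional) EMD output
    non_directional = arm_ab + arm_ba          # responds to motion in either direction
    return directional, non_directional
```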


The spatial frequency tuning of optic-flow-dependent behaviors in the bumblebee Bombus impatiens

May 2010 · 31 Reads · 62 Citations · Journal of Experimental Biology

Insects use visual estimates of flight speed for a variety of behaviors, including visual navigation, odometry, grazing landings and flight speed control, but the neuronal mechanisms underlying speed detection remain unknown. Although many models and theories have been proposed for how the brain extracts the angular speed of the retinal image, termed optic flow, we lack the detailed electrophysiological and behavioral data necessary to conclusively support any one model. One key property by which different models of motion detection can be differentiated is their spatiotemporal frequency tuning. Numerous studies have suggested that optic-flow-dependent behaviors are largely insensitive to the spatial frequency of a visual stimulus, but they have sampled only a narrow range of spatial frequencies, have not always used narrowband stimuli, and have yielded slightly different results between studies based on the behaviors being investigated. In this study, we present a detailed analysis of the spatial frequency dependence of the centering response in the bumblebee Bombus impatiens using sinusoidal and square wave patterns.




Winner-Take-All-Based Visual Motion Sensors

September 2006 · 23 Reads · 8 Citations · IEEE Transactions on Circuits and Systems II: Express Briefs

We present a novel analog VLSI implementation of visual motion computation based on the lateral inhibition and positive feedback mechanisms that are inherent in the hysteretic winner-take-all circuit. By use of an input-dependent bias current and threshold mechanism, the circuit resets itself to prepare for another motion computation. This implementation was inspired by the Barlow-Levick model of direction selectivity in the rabbit retina. Each pixel uses 33 transistors and two small capacitors to detect the direction of motion and can be altered with the addition of six more transistors to measure the interpixel transit time. Simulation results and measurements from fabricated VLSI designs are presented to show the operation of the circuits.
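The Barlow-Levick scheme that inspired this circuit can be illustrated in software: a unit excited by one photoreceptor is vetoed by a delayed inhibitory copy of its neighbour, so responses survive only for the preferred direction of motion. The filter, weight, and rectification below are illustrative; the paper's hysteretic winner-take-all implementation in analog VLSI is not reproduced here.

```python
import numpy as np

def barlow_levick(a, b, dt, tau=0.03, w_inh=2.0):
    """Direction-selective response excited by photoreceptor b and vetoed by delayed
    inhibition from neighbouring photoreceptor a (illustrative sketch)."""
    alpha = dt / (tau + dt)
    a_del = 0.0
    out = np.empty(len(b), dtype=float)
    for k in range(len(b)):
        a_del += alpha * (a[k] - a_del)             # delayed (low-pass) inhibitory signal
        out[k] = max(b[k] - w_inh * a_del, 0.0)     # excitation vetoed by delayed inhibition
    return out
```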



Citations (30)


... For instance, the feelSpace belt [7] delivers a sense of magnetic north, used for pedestrian navigation, and the FeltRadio [8] delivers a sense of radio waves present around the user. Applications are often within accessibility (e.g., [7,24,25,26]). Instead of presenting a physical property of the real world, Culbertson et al. [27] created an illusory force with a set of wearable vibration motors to direct the user's movements. ...

Reference:

Haptic Magnetism
A Navigation Aid for the Blind Using Tactile-Visual Sensory Substitution
  • Citing Conference Paper
  • August 2006

... The overall morphology of these neurons strongly resembles their Dipteran counterparts, with tree-like input arborizations within the lobula complex, and outputs in the lateral midbrain. However, as with this individual example, several neurons described in this study have their inputs originating solely from a deep neuropil on the anterior side of the lobula, similar to the "sublobula" identified in bees (Devoe et al., 1982; Strausfeld, 2005; Strausfeld et al., 2006), rather than from a posterior lobula plate. Until the homologies between these different lobula subregions with their counterparts in other insect groups are more clearly identified, we label these neurons more generally as lobula tangential cells (LTCs). ...

Parallel processing in the optic lobes of flies and motion detecting circuits
  • Citing Chapter
  • October 2006

... As neuromorphic front end sensors for temporal contrast detection improved, Higgins [76] and Indiveri [71] both adopted multi-chip approaches to motion-estimation, relying on a stand-alone specialized front end sensor for temporal contrast detection, and a separate chip for the FS token algorithm implementation. The multi-chip approach carries the disadvantage of requiring additional power for off-chip communication between the front-end sensor and the motion computation chip. ...

A Modular MultiChip Neuromorphic Architecture for Real-Time Visual Motion Processing
  • Citing Article
  • September 2000

Analog Integrated Circuits and Signal Processing

... They proposed an Elementary Motion Unit (EMU) which correlates illumination levels recorded at different instants and positions of the image. To implement EMUs, only photo-pixels, delays, and multipliers are required, and multipliers can even be substituted with other forms of comparator that are simpler to design (Gottardi and Yang, 1993; Harrison, 2005; Pant and Higgins, 2007). ...

A Biomimetic Focal Plane Speed Computation Architecture
  • Citing Article
  • June 2007

... The membership of, and interactions within, these smaller groups may be dependent on a metric distance, for example, the zonal model of shoaling fish [7], or a topological distance, as in the model of flocking starlings [8]. Models of perception have also been developed to investigate the use of optic flow in birds [9] and bats [10] and even visual tracking to aid collision avoidance in insects [11]. ...

Tracking improves performance of biological collision avoidance models
  • Citing Article
  • June 2012

Biological Cybernetics

... A similar work was done in [30], yet without any information on its real-time performance. On the other hand, some VLSI implementations based on analog application-specific chips were proposed [28], [31]- [33]. In these systems, multiple chips were required to compute the motion energy, and a general-purpose computer was used for velocity integration. ...

Multi-chip Implementation of a Biomimetic VLSI Vision Sensor Based on the Adelson-Bergen Algorithm
  • Citing Conference Paper
  • January 2003

Lecture Notes in Computer Science

... The spatially separate pooling of front-end signals in Fig. 1E would correspond to different compartments within the dendritic arborization of the tangential cell, which is plausible given the massive extent of these arborizations. This interpretation is consistent with previous modeling work on the emergence of directional selectivity across the medulla–lobula plate projection (Melano and Higgins, 2005). Specifically, Melano and Higgins placed the relevant nonlinearity between T5 and subsequent pooling by the tangential cell [see their Fig. 1b, ...

The neuronal basis of direction selectivity in lobula plate tangential cells
  • Citing Article
  • June 2005

Neurocomputing

... Similarly to [106], the integrated response can be used as a visual odometer over time. Moreover, there are also studies on the temporal adaptation of EMDs [38,89], the contrast sensitivity of EMDs [178], and an EMDs-based algorithm for global motion estimation [140], as well as a non-directional HR detectors model for simulating insect speed-dependent behaviour [96], and so on. ...

Contrast saturation in a neuronally-based model of elementary motion detection
  • Citing Article
  • June 2005

Neurocomputing

... "Center-surround" orientation selectivity has been mathematically modeled in numerous ways, including Gabor wavelets (Adelson and Bergen 1985) and by using a difference-of-Gaussians (DoG) function (Rodieck 1965). The leading model of orientation selectivity in insects supports a direct neuronal implementation of the DoG model (Rivera-Alvidrez et al. 2011). This model, based on both electrophysiological and neuroanatomical evidence, makes use of spatial spreading of photoreceptor inputs by two distinct types of amacrine cells that results in two Gaussian-blurred versions of the input image. ...

A neuronally based model of contrast gain adaptation in fly motion vision
  • Citing Article
  • August 2011

Visual Neuroscience