About
81 Publications
33,962 Reads
6,694 Citations
Additional affiliations
August 2007 - May 2012
Publications (81)
Neuromorphic computing is a brain-inspired approach to hardware and algorithm design that efficiently realizes artificial neural networks. Neuromorphic designers apply the principles of biointelligence discovered by neuroscientists to design efficient computational systems, often for applications with size, weight and power constraints. With this r...
Achieving personalized intelligence at the edge with real-time learning capabilities holds enormous promise to enhance our daily experiences and assist in decision-making, planning, and sensing. Yet, today's technology encounters difficulties with efficient and reliable learning at the edge, due to a lack of personalized data, insufficient hardware...
A critical enabler for progress in neuromorphic computing research is the ability to transparently evaluate different neuromorphic solutions on important tasks and to compare them to state-of-the-art conventional solutions. The Intel Neuromorphic Deep Noise Suppression Challenge (Intel N-DNS Challenge), inspired by the Microsoft DNS Challenge, tack...
The biologically inspired spiking neurons used in neuromorphic computing are nonlinear filters with dynamic state variables, which is distinct from the stateless neuron models used in deep learning. The new version of Intel’s neuromorphic research processor, Loihi 2, supports an extended range of stateful spiking neuron models with programmable dyn...
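To make the notion of a stateful spiking neuron concrete, here is a minimal sketch of a discrete-time leaky integrate-and-fire (LIF) neuron in Python; the decay and threshold values are illustrative and do not represent Loihi 2's actual programming interface.

    # Minimal discrete-time leaky integrate-and-fire (LIF) neuron: an
    # illustrative stateful spiking model, not Loihi 2's API.
    import numpy as np

    def lif_step(v, i_in, decay=0.9, threshold=1.0):
        """One time step: leak, integrate input current, spike and reset."""
        v = decay * v + i_in          # leaky integration (dynamic state)
        spike = v >= threshold        # non-linear threshold
        v = np.where(spike, 0.0, v)   # reset membrane potential on spike
        return v, spike

    v = np.zeros(4)                   # membrane potentials of 4 neurons
    rng = np.random.default_rng(0)
    for t in range(20):
        v, s = lif_step(v, rng.uniform(0.0, 0.3, size=4))
        if s.any():
            print(f"t={t}: neurons {np.flatnonzero(s)} spiked")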
Event-based vision sensors show great promise for use in embedded applications requiring low-latency passive sensing at a low computational cost. In this paper, we present an event-based algorithm that relies on an Extended Kalman Filter for 6-degree-of-freedom (6-DoF) sensor pose estimation. The algorithm updates the sensor pose event-by-event with low la...
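A full 6-DoF EKF is too large for a snippet, but the event-by-event predict/update pattern the abstract describes can be sketched on a one-dimensional toy state; the noise values and measurement model below are hypothetical.

    # Event-by-event Kalman update on a 1-D toy state; the paper's 6-DoF
    # EKF follows the same predict/update pattern with a larger state
    # vector. All noise values here are hypothetical.
    Q, R = 1e-4, 0.05                      # process / measurement noise (assumed)

    def on_event(x, P, t_prev, t, z):
        """One predict + update step triggered by a single event."""
        P = P + Q * (t - t_prev)           # predict: uncertainty grows with time
        K = P / (P + R)                    # Kalman gain
        x = x + K * (z - x)                # correct with event-derived measurement z
        P = (1.0 - K) * P
        return x, P, t

    x, P, t_prev = 0.0, 1.0, 0.0           # initial estimate, covariance, time
    for t, z in [(0.001, 0.9), (0.003, 1.1), (0.004, 1.0)]:
        x, P, t_prev = on_event(x, P, t_prev, t, z)
    print(x, P)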
The biologically inspired spiking neurons used in neuromorphic computing are nonlinear filters with dynamic state variables - very different from the stateless neuron models used in deep learning. The next version of Intel's neuromorphic research processor, Loihi 2, supports a wide range of stateful spiking neuron models with fully programmable dyn...
Deep artificial neural networks apply principles of the brain's information processing that led to breakthroughs in machine learning spanning many problem domains. Neuromorphic computing aims to take this a step further with chips more directly inspired by the form and function of biological neural circuits, so they can process new knowledge, adapt,...
This paper presents a long-term object tracking framework with a moving event camera under general tracking conditions. A first of its kind for these revolutionary cameras, the tracking framework uses a discriminative representation for the object with online learning, and detects and re-tracks the object when it comes back into the field-of-view....
We present the Surrogate-gradient Online Error-triggered Learning (SOEL) system for online few-shot learning on neuromorphic processors. The SOEL learning system uses a combination of transfer learning and principles of computational neuroscience and deep learning. We show that partially trained deep Spiking Neural Networks (SNNs) implemented on neuromorphic hardware can r...
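The error-triggered aspect can be illustrated with a toy delta-rule learner that updates its weights only when the output error exceeds a threshold; the threshold, learning rate, and feature dimensions below are illustrative, not those of SOEL.

    # Error-triggered plasticity sketch: a weight update fires only when
    # the output error crosses a threshold, rather than on every sample.
    import numpy as np

    theta, lr = 0.5, 0.05                     # error threshold, learning rate (assumed)
    w = np.zeros(8)                           # readout weights (illustrative)
    rng = np.random.default_rng(2)

    updates = 0
    for _ in range(100):
        x = rng.standard_normal(8)            # stand-in for feature spikes/traces
        target = 1.0
        err = target - w @ x
        if abs(err) > theta:                  # learn only on large errors
            w += lr * err * x                 # delta-rule style update
            updates += 1
    print(f"{updates} updates out of 100 samples")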
Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional...
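As a concrete reference for the event encoding described here, a small Python sketch with made-up coordinates and timestamps:

    # Each event is a tuple (timestamp, x, y, polarity), where polarity is
    # the sign of the per-pixel brightness change.
    from collections import namedtuple

    Event = namedtuple("Event", ["t", "x", "y", "polarity"])  # polarity: +1 / -1

    stream = [Event(0.000012, 64, 31, +1),
              Event(0.000019, 65, 31, +1),
              Event(0.000021, 12, 90, -1)]

    on_events = [e for e in stream if e.polarity > 0]
    print(f"{len(on_events)} ON events out of {len(stream)}")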
Neuromorphic computing applies insights from neuroscience to uncover innovations in computing technology. In the brain, billions of interconnected neurons perform rapid computations at extremely low energy levels by leveraging properties that are foreign to conventional computing systems, such as temporal spiking codes and finely parallelized proce...
We present the first purely event-based, energy-efficient approach for dynamic object detection and categorization with a freely moving event camera. Compared to traditional cameras, event-based object recognition systems are considerably behind in terms of accuracy and algorithmic maturity. To address this gap, this paper presents an event-based feature e...
In this article, we present a systematic computational model to explore brain-based computation for object recognition. The model extracts temporal features embedded in address-event representation (AER) data and discriminates different objects by using spiking neural networks (SNNs). We use multispike encoding to extract temporal features containe...
With the success of deep learning, object recognition systems that can be deployed for real-world applications are becoming commonplace. However, inference that needs to largely take place on the 'edge' (not processed on servers) is a highly computation- and memory-intensive workload, making it intractable for low-power mobile nodes and remote se...
Recent work suggests that synaptic plasticity dynamics in biological models of neurons and neuromorphic hardware are compatible with gradient-based learning (Neftci et al., 2019). Gradient-based learning requires iterating several times over a dataset, which is both time-consuming and constrains the training samples to be independently and identicall...
In this paper, we present EBBIOT, a novel paradigm for object tracking using stationary neuromorphic vision sensors in low-power sensor nodes for the Internet of Video Things (IoVT). Different from fully event-based tracking or fully frame-based approaches, we propose a mixed approach where we create event-based binary images (EBBI) that can use mem...
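A minimal sketch of the event-based binary image idea: events falling within a time window set pixels in a binary frame; the sensor resolution and window length below are assumptions, not values from the paper.

    # Build an event-based binary image (EBBI): events within a time window
    # set pixels in a binary frame that frame-based tools can then process.
    import numpy as np

    H, W, window = 240, 304, 0.005            # assumed sensor size, 5 ms window
    frame = np.zeros((H, W), dtype=bool)

    events = [(0.0001, 10, 20), (0.0012, 11, 20), (0.0040, 200, 100)]  # (t, x, y)
    t0 = events[0][0]
    for t, x, y in events:
        if t - t0 <= window:
            frame[y, x] = True                # any event marks the pixel

    print(frame.sum(), "active pixels in this window")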
In the version of this chapter that was originally published, the funding information given at the bottom of the first page was not correct. This has been updated so that the new version now reads: “Supported by Temasek Research Fellowship.”
We present the first purely event-based, energy-efficient approach for object detection and categorization using an event camera. Compared to traditional frame-based cameras, choosing event cameras results in high temporal resolution (order of microseconds), low power consumption (few hundred mW) and wide dynamic range (120 dB) as attractive proper...
Artificial neural networks have become ubiquitous in modern life, which has triggered the emergence of a new class of application specific integrated circuits for their acceleration. ReRAM-based accelerators have gained significant traction due to their ability to leverage in-memory computations. In a crossbar structure, they can perform multiply-a...
Event cameras are bio-inspired sensors that work radically differently from traditional cameras. Instead of capturing images at a fixed rate, they measure per-pixel brightness changes asynchronously. This results in a stream of events, which encode the time, location and sign of the brightness changes. Event cameras possess outstanding properties comp...
Interest in event-based vision sensors has proliferated in recent years, with innovative technology becoming more accessible to new researchers and highlighting such sensors' potential to enable low-latency sensing at low computational cost. These sensors can outperform frame-based vision sensors regarding data compression, dynamic range, temporal...
Configuring deep Spiking Neural Networks (SNNs) is an exciting research avenue for low-power, spike-event-based computation. However, the spike generation function is non-differentiable and therefore not directly compatible with the standard error backpropagation algorithm. In this paper, we introduce a new general backpropagation mechanism for lear...
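One common workaround for the non-differentiable spike function, sketched below, keeps the hard threshold in the forward pass and substitutes a smooth surrogate derivative in the backward pass; whether this matches the paper's specific mechanism is not established by the excerpt, and all constants are illustrative.

    # Surrogate-gradient sketch: forward pass uses the hard threshold,
    # backward pass substitutes a smooth surrogate for its zero gradient.
    import numpy as np

    def spike_forward(v, threshold=1.0):
        return (v >= threshold).astype(float)          # non-differentiable step

    def spike_surrogate_grad(v, threshold=1.0, beta=5.0):
        # derivative of a fast sigmoid, used in place of the step's gradient
        return beta / (1.0 + beta * np.abs(v - threshold)) ** 2

    v = np.array([0.2, 0.95, 1.3])
    print(spike_forward(v), spike_surrogate_grad(v))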
In recent years, neuromorphic computing has become an important emerging research area. Neuromorphic computing takes advantage of computer architectures and sensors whose design and functionality are inspired by the brain. There has been rapid progress in computational theory, spiking neurons, learning algorithms, signal processing, circuit design...
Vision processing with Dynamic Vision Sensors (DVS) is becoming increasingly popular. This type of bio-inspired vision sensor does not record static scenes: DVS pixel activity relies on changes in light intensity. In this paper, we introduce a platform for object recognition with a DVS in which the sensor is installed on a moving pan-tilt unit in c...
Asynchronous event-based sensors, or “silicon retinae,” are a new class of vision sensors inspired by biological vision systems. The output of these sensors often contains a significant number of noise events along with the signal. Filtering these noise events is a common preprocessing step before using the data for tasks such as tracking and class...
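A standard example of such a filter, sketched here under assumed parameters, keeps an event only if its 3x3 neighbourhood saw another event within a short time window; isolated events are treated as noise.

    # Background-activity filter sketch: an event is kept only if some
    # nearby pixel fired within the last dt seconds. Parameters are
    # illustrative, not those of the paper.
    import numpy as np

    H, W, dt = 240, 304, 0.005
    last_ts = np.full((H, W), -np.inf)     # most recent event time per pixel

    def filter_event(t, x, y):
        """Return True if the event has recent support in its 3x3 neighbourhood."""
        y0, y1 = max(0, y - 1), min(H, y + 2)
        x0, x1 = max(0, x - 1), min(W, x + 2)
        supported = (t - last_ts[y0:y1, x0:x1] <= dt).any()
        last_ts[y, x] = t
        return supported

    print(filter_event(0.001, 50, 50))   # False: isolated event, likely noise
    print(filter_event(0.002, 51, 50))   # True: supported by the previous event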
As the interest in event-based vision sensors for mobile and aerial applications grows, there is an increasing need for high-speed and highly robust algorithms for performing visual tasks using event-based data. As event rate and network structure have a direct impact on the power consumed by such systems, it is important to explore the efficiency...
We introduce a new event-based visual descriptor, termed as distribution aware retinal transform (DART), for pattern recognition using silicon retina cameras. The DART descriptor captures the information of the spatio-temporal distribution of events, and forms a rich structural representation. Consequently, the event context encoded by DART greatly...
We introduce a generic visual descriptor, termed as distribution aware retinal transform (DART), that encodes the structural context using log-polar grids for event cameras. The DART descriptor is applied to four different problems, namely object classification, tracking, detection and feature matching: (1) The DART features are directly employed a...
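The structural idea of a log-polar grid descriptor can be sketched as a histogram of event coordinates over rings and wedges; the bin counts, radii, and normalisation below are illustrative rather than the DART specification.

    # Bin event coordinates on a log-polar grid around a centre pixel to
    # form a simple structural descriptor (illustrative, not DART itself).
    import numpy as np

    def log_polar_bin(ev_xy, centre, n_rings=4, n_wedges=8, r_max=32.0):
        """Histogram event coordinates into a log-polar grid -> descriptor."""
        d = np.asarray(ev_xy, dtype=float) - np.asarray(centre, dtype=float)
        r = np.hypot(d[:, 0], d[:, 1]) + 1e-9
        theta = np.arctan2(d[:, 1], d[:, 0])            # angle in [-pi, pi]
        ring = np.clip((np.log1p(r) / np.log1p(r_max) * n_rings).astype(int),
                       0, n_rings - 1)
        wedge = ((theta + np.pi) / (2 * np.pi) * n_wedges).astype(int) % n_wedges
        desc = np.zeros((n_rings, n_wedges))
        np.add.at(desc, (ring, wedge), 1.0)
        return desc / max(len(ev_xy), 1)                # normalised descriptor

    events = [(12, 5), (3, -2), (-8, 8), (1, 1)]
    print(log_polar_bin(events, centre=(0, 0)).shape)   # (4, 8)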
This paper describes a fully spike-based neural network for optical flow estimation from Dynamic Vision Sensor data. A low power embedded implementation of the method which combines the Asynchronous Time-based Image Sensor with IBM's TrueNorth Neurosynaptic System is presented. The sensor generates spikes with sub-millisecond resolution in response...
Compared to standard frame-based cameras, biologically-inspired event-based sensors capture visual information with low latency and minimal redundancy. These event-based sensors are also far less prone to motion blur than traditional cameras, and still operate effectively in high dynamic range scenes. However, classical frame-based algorithms are...
We present a new passive and low-power localization method for quadcopter UAVs (unmanned aerial vehicles) using dynamic vision sensors. This method works by detecting the rotation speed of the propellers, which is normally higher than the speed of movement of other objects in the background. Dynamic vision sensors are fast and power efficient. We h...
This paper describes novel event-based spatiotemporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time-oriented approach to extract spatiotemporal features from the as...
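A minimal sketch of a time surface: store the most recent event timestamp per pixel and decay it exponentially relative to the current time; the decay constant tau here is illustrative.

    # Time surface: per-pixel timestamps of the most recent event,
    # exponentially decayed relative to the current time.
    import numpy as np

    H, W, tau = 64, 64, 0.05
    last_ts = np.full((H, W), -np.inf)

    events = [(0.010, 5, 5), (0.020, 6, 5), (0.030, 5, 6)]  # (t, x, y)
    for t, x, y in events:
        last_ts[y, x] = t

    t_now = 0.035
    surface = np.exp((last_ts - t_now) / tau)   # 1 for fresh events, -> 0 with age
    print(surface.max(), surface[5, 5])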
Bio-inspired Address Event Representation change detection image sensors, also known as silicon retinae, have matured to the point where they can be purchased commercially, and are easily operated by laymen. Noise is present in the output of these sensors, and improved noise filtering will enhance performance in many applications. A novel approach...
The growing demands placed upon the field of computer vision have renewed the focus on alternative visual scene representations and processing paradigms. Silicon retinae provide an alternative means of imaging the visual environment, and produce frame-free spatio-temporal data. This paper presents an investigation into event-based digit classificat...
Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast d...
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based...
Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processin...
This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous Address Event Representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined ti...
This paper describes the application of an event-based dynamic and active pixel vision sensor (DAVIS) for racing human vs. computer on a slot car track. The DAVIS is mounted in "eye-of-god" view. The DAVIS image frames are only used for setup and are subsequently turned off because they are not needed. The dynamic vision sensor (DVS) events are the...
Real-time visual identification and tracking of objects is a computationally intensive task, particularly in cluttered environments which contain many visual distracters. In this paper we describe a real-time bio-inspired system for object tracking and identification which combines an event-based vision sensor with a convolutional neural network ru...
Visual motion estimation is a computationally intensive, but important task for sighted animals. Replicating the robustness and efficiency of biological visual motion estimation in artificial systems would significantly enhance the capabilities of future robotic agents. Twenty five years ago, in this very journal, Carver Mead outlined his argument...
This paper presents a frame-free time-domain imaging approach designed to alleviate the non-ideality of finite exposure measurement time (intrinsic to all integrating imagers), limiting the temporal resolution of the ATIS asynchronous time-based image sensor concept. The method uses the time-domain correlated double sampling (TCDS) and change detec...
Current interest in neuromorphic computing continues to drive development of sensors and hardware for spike-based computation. Here we describe a hierarchical architecture for visual motion estimation which uses a spiking neural network to exploit the sparse high temporal resolution data provided by neuromorphic vision sensors. Although spike-based...
Reliable visual motion estimation is typically regarded as a difficult problem. Noise sensitivity and computational requirements often prohibit effective real-time application on mobile platforms. Despite these difficulties, biological systems reliably estimate visual motion in real-time and heavily rely on it. Here we present an FPGA implementatio...
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has...
Compressive sensing has allowed for reconstruction of missing pixels in incomplete images with higher accuracy than was previously possible. Moreover, video data or sequences of images contain even more correlation, leading to a much sparser representation as demonstrated repeatedly in numerous digital video formats and international standards. Com...
We present a software simulation and a hardware proof of concept for a compact low-power lightweight ultrasonic echolocation design that is capable of imaging a 120° field of view with a single ping. The sensor uses a single transmitter and a linear array of ten microphones, followed by a bank of eight spatiotemporal filters to determine the bearing...
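One classical way to realise such bearing-selective spatiotemporal filtering is delay-and-sum beamforming over the microphone array; the sketch below uses an assumed sampling rate, element spacing, and placeholder signals, not the paper's actual filter bank.

    # Delay-and-sum bearing estimation for a linear microphone array; the
    # geometry and rates are assumptions, and the signals are placeholders.
    import numpy as np

    fs, c, spacing, n_mics = 192_000, 343.0, 0.005, 10   # assumed parameters

    def bearing_energy(signals, angle_deg):
        """Align the channels for one candidate angle and sum their energy."""
        delays = np.arange(n_mics) * spacing * np.sin(np.radians(angle_deg)) / c
        shifts = np.round(delays * fs).astype(int)
        aligned = [np.roll(s, -k) for s, k in zip(signals, shifts)]
        return np.sum(np.sum(aligned, axis=0) ** 2)

    rng = np.random.default_rng(1)
    sigs = rng.standard_normal((n_mics, 1024))           # placeholder echoes
    angles = np.arange(-60, 61, 15)
    best = max(angles, key=lambda a: bearing_energy(sigs, a))
    print("estimated bearing:", best, "degrees")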
Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for config...
Recently there has been an increasing interest in application of bio-mimetic controllers and neuromorphic vision sensors to planetary landing tasks. Within this context, we present combined low-level (SPICE) and high-level (behavioral) simulations of a novel neuromorphic VLSI vision sensor in a realistic planetary landing scenario. We use result...
We describe a robotic system, consisting of an arm and an active vision system, that learns to align its sensory and motor maps so that it can successfully reach the tip of its arm to touch the point where it is looking. This system uses an unsupervised Hebbian learning algorithm, and learns the alignment by watching its arm waving in front of its eyes. A...
A low-power, compact, lightweight architecture for an in-air SONAR device is proposed and simulated. The sensor is intended to aid in the autonomous navigation of a micro-unmanned aerial vehicle. Inspired by the manner in which bats use ultrasound to sense and navigate their environments, the sensor transmits a single ping and uses an array of smal...
There are various neuron models which can be used to emulate the neural networks responsible for cortical and spinal processes. One example is the Central Pattern Generator (CPG) networks, which are spinal neural circuits responsible for controlling the timing of periodic systems in vertebrates. In order to model the CPG effectively, it is necessar...
The cross-correlation function is an important yet computationally intensive processing step in many engineering applications such as wireless communication and object recognition. A neuromorphic approach to this function has been shown to facilitate implementation using a neural-based architecture. Using a custom designed array of silicon neurons...
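For reference, the underlying operation being approximated by the silicon-neuron array is plain cross-correlation, e.g.:

    # Reference cross-correlation of two sequences using numpy.
    import numpy as np

    a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
    b = np.array([1.0, 2.0, 1.0])
    xcorr = np.correlate(a, b, mode="full")
    lag = np.argmax(xcorr) - (len(b) - 1)     # lag of best alignment
    print(xcorr, "peak at lag", lag)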
In limbed animals, spinal neural circuits responsible for controlling muscular activities during walking are called central pattern generators (CPG). CPG networks display oscillatory activities that actuate individual or groups of muscles in a coordinated fashion so that the limbs of the animal are flexed and extended at the appropriate time and w...
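The alternating flexor/extensor activity of a CPG can be sketched with two coupled phase oscillators that settle into antiphase; the frequency and coupling gain below are illustrative.

    # Two-cell CPG sketch as coupled phase oscillators settling into
    # antiphase (flexor/extensor alternation); gains are illustrative.
    import numpy as np

    omega = 2 * np.pi * 1.0        # 1 Hz intrinsic stepping rhythm (assumed)
    k, dt = 2.0, 0.001
    phi = np.array([0.1, 0.2])     # phases of flexor / extensor cells

    for _ in range(5000):          # 5 s of simulated time
        coupling = k * np.sin(phi[::-1] - phi - np.pi)   # prefers antiphase
        phi = phi + (omega + coupling) * dt

    print("phase difference (rad):", (phi[1] - phi[0]) % (2 * np.pi))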