(a) OSCAR II, like its predecessor OSCAR I, is a tethered aerial robot that orients its heading about the vertical (i.e., yaw) axis by driving 


Source publication
Article
Full-text available
Neurobiological and neuroethological findings on insects can be used to design and construct small robots controlling their navigation on the basis of bio-inspired visual strategies and circuits. Animals' visual guidance is partly mediated by motion-sensitive neurons, which are responsible for gauging the optic flow. Although neurons of this kind w...

Citations

... Accordingly, the emphasis herein is laid on exploring its potential for estimating the dynamic complexity of visual scenes in mobile robotics. There have been relevant robotic dynamic vision systems, taking inspiration from nature, that work effectively to encode diverse motion cues [1], [19]. The motivation for selecting the AVDM to estimate visual dynamic complexity is its greater independence of spatial frequency and image contrast compared to other related works in the current repository. ...
Conference Paper
Paper accepted: Visual dynamic complexity is a ubiquitous, hidden attribute of the visual world that every motion-sensitive vision system is faced with. However, it is implicit and intractable, and it has never been quantitatively described due to the difficulty of defining temporal features correlated with spatial image complexity. Learning from biological visual processing, we propose a novel bio-robotic approach to estimate visual dynamic complexity, effectively and efficiently, which can be used as a new metric for assessing dynamic vision systems implemented in robots. Here we apply a bio-inspired neural network model to quantitatively estimate such complexity associated with the spatio-temporal frequency of a moving visual scene. The model is implemented in an autonomous micro-mobile robot navigating freely in an arena enclosed by visual walls displaying moving scenes. The response of the embedded visual module can make reasonable predictions of the surrounding dynamic complexity, since it can be mapped monotonically to the varying moving frequencies of the visual scene. The experiments demonstrate that this "predictor" is effective in different visual scenarios and can be established as a new metric for assessing visual systems. To prove its viability, we use it to investigate the performance boundary of a collision detection visual system in a changing environment with increasing dynamic complexity.
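To make the idea of a monotonic complexity score concrete, the toy sketch below scores a short grey-scale clip by its temporal-change energy and squashes it into [0, 1]. It is only an illustration of such a mapping, not the AVDM described above; the function name and constants are assumptions.

```python
# Illustrative sketch only (not the AVDM): score the "dynamic complexity" of a clip
# by its mean temporal-change energy, then map it monotonically into [0, 1].
import numpy as np

def dynamic_complexity_score(frames, saturation=5.0):
    """frames: array of shape (T, H, W) holding grey-scale frames."""
    frames = np.asarray(frames, dtype=np.float64)
    # Mean absolute frame-to-frame luminance change = temporal-change energy.
    temporal_energy = np.mean(np.abs(np.diff(frames, axis=0)))
    # Monotonic squashing: faster/denser scene motion -> score closer to 1.
    return float(temporal_energy / (temporal_energy + saturation))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    slow = np.cumsum(rng.normal(0, 1, (50, 32, 32)), axis=0)  # gently drifting scene
    fast = np.cumsum(rng.normal(0, 8, (50, 32, 32)), axis=0)  # rapidly changing scene
    print(dynamic_complexity_score(slow), "<", dynamic_complexity_score(fast))
```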
... Both global views of the scenes and inertial information are still sensed and processed by the brain latently. Therefore, the fusion of event, intensity, and inertial information has support from bionic research [4,5]. Now, several event cameras, such as DAVIS 346 and CeleX-4, provide normal intensity images, angular velocity, and acceleration from an embedded IMU (Inertial Measurement Unit), supporting the feasibility of using complementary information for feature tracking. ...
Article
Full-text available
Achieving efficient and accurate feature tracking on event cameras is a fundamental step for practical high-level applications, such as simultaneous localization and mapping (SLAM), structure from motion (SfM), and visual odometry (VO) in GNSS (Global Navigation Satellite System)-denied environments. Although many asynchronous tracking methods purely using event flow have been proposed, they suffer from high computation demand and drift problems. In this paper, event information is still processed in the form of synthetic event frames to better adapt to practical demands. Weighted fusion of multiple hypothesis testing with batch processing (WF-MHT-BP) is proposed based on loose integration of event, intensity, and inertial information. More specifically, with inertial information acting as priors, multiple hypothesis testing with batch processing (MHT-BP) produces coarse feature-tracking solutions on event frames in a batch-processing way. With a time-related stochastic model, a weighted fusion mechanism fuses the feature-tracking solutions from event and intensity frames. Compared with other state-of-the-art feature-tracking methods on event cameras, evaluation on public datasets shows significant improvements in accuracy and efficiency and comparable performance in terms of feature-tracking length.
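As a rough illustration of the weighted-fusion step described above, the sketch below combines a feature position tracked on event frames with one tracked on intensity frames, using weights that decay with the age of each measurement. It is a toy example under assumed names and constants, not the WF-MHT-BP algorithm itself.

```python
# Toy weighted fusion of two feature-position estimates; not the paper's WF-MHT-BP.
import numpy as np

def fuse_tracks(p_event, t_event, p_intensity, t_intensity, now, tau=0.05):
    """p_*: 2-D feature positions (x, y); t_*: measurement timestamps in seconds."""
    w_event = np.exp(-(now - t_event) / tau)         # fresher measurement -> larger weight
    w_intensity = np.exp(-(now - t_intensity) / tau)
    fused = (w_event * np.asarray(p_event, float) +
             w_intensity * np.asarray(p_intensity, float)) / (w_event + w_intensity)
    return fused

if __name__ == "__main__":
    # The event-frame estimate is fresher, so the fused position lies closer to it.
    print(fuse_tracks([100.0, 50.0], 0.098, [102.0, 51.0], 0.080, now=0.100))
```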
... Although the biological substrates of ON/OFF channels have remained largely unknown due to many technical obstacles in neuroanatomy and neurophysiology, computational modelling and bio-robotic methods are undoubtedly powerful tools for simulating visual signal processing based on ON/OFF channels dealing with real-world challenges. The ON/OFF-channel-based models could also provide useful feedback, implications, or hypotheses back to neuroscience after experimenting with models using both synthetic and natural stimuli comparable to those in physiological experiments, as well as implementing the models in real bodies like robots, in real environments under physical settings similar to ethological trials on animals [5], [6], [7]. Therefore, this survey is driven mainly by the following objectives: 1) articulating currently known concepts of early visual motion processing within parallel ON/OFF channels in animals; 2) summarising present mathematical/computational modelling works and applications based on such bio-plausible, signal-bifurcating structures; 3) highlighting the significance of ON/OFF channels in implementing different linear/non-linear functionality towards specific direction selectivity of dynamic vision models, in a both effective and efficient manner; 4) demonstrating the robustness of ON/OFF channels in real-time, feed-forward visual processing against complex dynamic backgrounds with high input variability; 5) predicting future trends based on ON/OFF channels in AI technology via bridging brain signal processing mechanisms to computer models and advanced, neuromorphic sensors such as event-driven cameras; 6) exemplifying a promising approach to closing the gap between neuroscience and AI. ...
... These naturally evolved dynamic vision systems have been providing us with a rich source of inspiration for modelling, engineering, and application. Franceschini gave an elegant overview of fly motion vision, including ON and OFF elementary motion detectors (EMD), and its applications on various flying robots and vehicles; more importantly, he explained nicely the mutual promotion between neuroscience and bio-robotics [7]. With similar ideas, Serres and Ruffier proposed a detailed review of optic flow (OF)-based methods in bio-robotics [6]. ...
... In this research we have focused only on the ON-OFF-RGC, which are local motion-sensing cells combining direction-selective polarity signals from the ON-SAC and the OFF-SAC; each RGC has an RF covering only a few degrees. Conversely, this fusion happens on the large dendrites of wide-field, motion-sensing tangential cells with an RF diameter of up to 180 degrees; no local motion-sensing neurons have been reported so far [7] in the invertebrates' visual systems that are sensitive to both ON-contrast and OFF-contrast stimuli. 3) Furthermore, the DO responses exhibited by the LPTC in flies are not found in the RGC of vertebrates. ...
Preprint
Visual motion perception is an essential ability for sighted animals and for artificially intelligent systems interacting effectively and safely with surrounding objects and environments. Biological visual neural systems that have naturally evolved over hundreds of millions of years are quite efficient and robust for motion perception. However, current artificial visual systems are far from such capability. Here we propose that this gap can be significantly closed by computational modelling of biological visual systems with the formulation of parallel ON/OFF channels coping with highly variable natural input signals. Such a signal-bifurcating structure has been found by neuroscientists in many animal species; it articulates that early visual motion is split and processed in segregated pathways depending on luminance increment (ON) and decrement (OFF) responses in animals' receptive fields. The corresponding biological substrates, and the necessity for artificial visual systems, however, have not yet been completely elucidated, leaving open questions on why and how biological visual systems process motion in parallel computation, and whether the signal bifurcation can boost artificial visual systems to address real-world motion detection challenges. This paper focuses on addressing these questions by summarising current progress and highlighting the significance of ON/OFF channels in building robust motion perception visual systems. It contributes a pioneering, in-depth survey on computational modelling of ON/OFF channels towards realising different motion sensitivity and selectivity in artificial dynamic vision systems, including looming, translating, and small-target detection models and corresponding applications. This survey for the first time provides insights into how such a bio-plausible computational structure works effectively to achieve the functionality of different motion-sensitive neural models in keeping with the soundness and robustness of biological principles. The survey predicts future trends based on ON/OFF channels for visual computation and demonstrates a promising approach to bridging brain signal processing mechanisms to computer models and advanced neuromorphic sensors such as event-driven cameras.
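For readers unfamiliar with the signal-bifurcating structure the survey refers to, the sketch below shows the generic, textbook-style ON/OFF split: the temporal luminance change at each pixel is half-wave rectified into parallel brightening (ON) and darkening (OFF) channels before any motion correlation. It is not drawn from any specific model in the survey.

```python
# Generic ON/OFF bifurcation by half-wave rectification of the temporal luminance change.
import numpy as np

def on_off_split(prev_frame, curr_frame):
    """Return (on, off) channels from two consecutive grey-scale frames."""
    change = np.asarray(curr_frame, float) - np.asarray(prev_frame, float)
    on = np.maximum(change, 0.0)    # luminance increments only
    off = np.maximum(-change, 0.0)  # luminance decrements only (kept positive)
    return on, off

if __name__ == "__main__":
    prev = np.array([[10.0, 200.0], [120.0, 120.0]])
    curr = np.array([[60.0, 150.0], [120.0, 180.0]])
    on, off = on_off_split(prev, curr)
    print(on)   # responds where the image brightened
    print(off)  # responds where the image darkened
```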
... A robust and efficient collision prediction system is ubiquitous amongst the vast majority of sighted animals. As a source of inspiration, the insects' dynamic vision systems have been explored as powerful paradigms for collision detection and avoidance with numerous applications in machine vision, as reviewed in (Franceschini, 2014; Serres and Ruffier, 2017; Fu et al., 2018a; Fu et al., 2019b). As a prominent example, locusts can migrate over long distances in dense swarms containing hundreds to thousands of individuals free of collision (Kennedy, 1951). ...
Article
Full-text available
Collision prevention poses a major research and development obstacle for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models inspired by the locust's LGMD-1 and LGMD-2 visual pathways as fast and low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness in real-time critical traffic scenarios in which real physical crashes happen have never been systematically investigated, owing to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMD-inspired visual systems in a physical implementation of critical traffic scenarios at low cost and high flexibility. The proposed visual systems are applied as the only collision-sensing modality in each micro-mobile robot, which conducts avoidance by abrupt braking. The simulated traffic resembles on-road sections including intersection and highway scenes, in which the roadmaps are rendered by coloured, artificial pheromones upon a wide LCD screen acting as the ground of an arena. The robots, with light sensors at the bottom, can recognise the lanes and signals and tightly follow paths. The emphasis herein is laid on corroborating the robustness of the LGMD neural system models in different dynamic robot scenes to alert potential crashes in time. This study well complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMD-inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. This paper also exhibits a novel, tractable, and affordable robotic approach to evaluating online visual systems in dynamic scenes.
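To sketch how such an LGMD-flavoured collision alert can gate abrupt braking, the toy below follows the classic excitation/lateral-inhibition structure: frame-difference excitation is suppressed by a delayed, spatially spread inhibition, and a brake flag is raised when the pooled activity exceeds a threshold. All parameter values are assumptions; this is not the LGMD-1/LGMD-2 models evaluated in the paper.

```python
# Toy LGMD-style step: excitation minus delayed lateral inhibition, pooled and thresholded.
import numpy as np
from scipy.ndimage import uniform_filter

def lgmd_step(prev_frame, curr_frame, prev_excitation, w_inhibit=0.6, threshold=0.15):
    excitation = np.abs(np.asarray(curr_frame, float) - np.asarray(prev_frame, float)) / 255.0
    # Inhibition: the previous step's excitation spread to neighbouring cells (one-frame delay).
    inhibition = uniform_filter(prev_excitation, size=3)
    summed = np.maximum(excitation - w_inhibit * inhibition, 0.0)
    brake = summed.mean() > threshold        # collision alert -> abrupt braking
    return brake, excitation                 # excitation becomes the next step's inhibition source

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f0 = rng.integers(0, 255, (32, 32)).astype(float)
    f1 = f0.copy()
    f1[4:28, 4:28] = 255.0                   # crude stand-in for a looming, expanding object
    print(lgmd_step(f0, f1, np.zeros((32, 32)))[0])   # True: braking triggered
```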
... Flies are the most famous model system for studying biological motion detection strategies; for reviews see [18]-[20]. The fly visual systems have been broadly applied to various real-world machine navigation applications, including micro-aerial vehicles, unmanned aerial vehicles, and ground robotics; for reviews see [4], [21]-[23]. ...
Conference Paper
Full-text available
This paper has been accepted for presentation at IJCNN 2021 and for publication in the conference proceedings published by IEEE. Abstract: This paper aims at addressing a challenging problem of reliably estimating image motion against the highly variable natural signals that artificial dynamic vision systems are faced with. Previously, the visual system response has always represented fluctuation and high variance influenced by spatial contrast, the local difference between neighbouring luminance values. Effective contrast computation is therefore a prerequisite for robust motion vision. In this regard, sighted animals such as flies are remarkably adept at estimating image motion regardless of image statistics by rapidly adjusting contrast sensitivity. Current artificial visual systems, however, cannot account for this capability. Learning from neuroscience, here we propose contrast vision computation to improve a state-of-the-art, bio-inspired neural network model for background motion estimation. This comprises two neural computation schemes: (1) an instantaneous, feedback divisive contrast normalisation prior to motion correlation in the ON and OFF pathways to reduce local contrast sensitivity, and (2) parallel contrast pathways that influence ON/OFF motion signals negatively at the pooling output layer to suppress high-contrast optic flows. We created a dataset of many shifting natural images with high input variability to investigate the proposed method. The experiments have demonstrated the effectiveness and robustness of the proposed contrast vision computation in reducing response fluctuation and variance against natural signals. The fidelity of motion perception has thus been significantly increased. The proposed methods could be generic to other motion vision models dealing with high-contrast visual scenes.
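The divisive contrast-normalisation scheme (1) can be illustrated with the sketch below, where each pixel is divided by an estimate of its local contrast plus a semi-saturation constant so that high-contrast regions are attenuated before motion correlation. This is a simple feed-forward approximation for illustration; the constants, the window size, and the contrast estimator are assumptions rather than the paper's instantaneous feedback formulation.

```python
# Illustrative divisive contrast normalisation (feed-forward approximation).
import numpy as np
from scipy.ndimage import uniform_filter

def divisive_contrast_normalise(frame, window=5, sigma=10.0):
    """frame: grey-scale image; returns a contrast-normalised copy."""
    frame = np.asarray(frame, dtype=np.float64)
    local_mean = uniform_filter(frame, size=window)
    local_var = uniform_filter(frame ** 2, size=window) - local_mean ** 2
    local_contrast = np.sqrt(np.maximum(local_var, 0.0))  # local standard deviation
    return frame / (sigma + local_contrast)               # divisive normalisation

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.integers(0, 255, (16, 16)).astype(float)
    out = divisive_contrast_normalise(img)
    print(img.std(), ">", out.std())   # response variance is reduced after normalisation
```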
... Flies are the most famous model system for studying biological motion detection strategies; for reviews see [18]-[20]. The fly visual systems have been broadly applied to various real-world machine navigation applications, including micro-aerial vehicles, unmanned aerial vehicles, and ground robotics; for reviews see [4], [21]-[23]. ...
Preprint
This paper aims at addressing a challenging problem that artificial dynamic vision systems are faced with, that is, reliably extracting motion cues from highly variable natural signals. The visual system response always represents fluctuation and high variance influenced by spatial contrast, the local difference between neighbouring luminance values. Robust contrast computation is therefore a prerequisite for dynamic visual processing. In this regard, sighted animals, such as flies, are remarkably adept at estimating image motion regardless of image statistics by rapidly adjusting contrast sensitivity. Current artificial visual systems, however, cannot account for this robustness. Learning from recent neuroscience progress, here we propose a novel modelling of contrast vision to improve a state-of-the-art, bio-inspired neural network model for natural image motion estimation. This comprises two neural computation schemes: (1) an instantaneous, feedback divisive contrast normalisation prior to motion correlation in the ON and OFF pathways to reduce local contrast sensitivity, and (2) a parallel contrast pathway that influences motion signals negatively at the final pooling output layer to suppress high-contrast local motion. We used a hundred different shifting natural images with high input variability to investigate the model. The comparative experiments have demonstrated the effectiveness and robustness of the proposed contrast vision computations in reducing response fluctuation and variance against natural signals. The fidelity of motion perception has been significantly increased. The proposed methods could be generic to other motion vision models dealing with high-contrast visual scenes.
... Flying insects successfully cope with similar restrictions and have thus served as a rich source of inspiration for creating autonomously flying robots [2]. In turn, the robots can be used as embodied models of flying insects for testing hypotheses that are relevant to biology [3,4]. ...
Article
Full-text available
Flying insects employ elegant optical-flow-based strategies to solve complex tasks such as landing or obstacle avoidance. Roboticists have mimicked these strategies on flying robots with only limited success, because optical flow (1) cannot disentangle distance from velocity and (2) is less informative in the highly important flight direction. Here, we propose a solution to these fundamental shortcomings by having robots learn to estimate distances to objects by their visual appearance. The learning process obtains supervised targets from a stability-based distance estimation approach. We have successfully implemented the process on a small flying robot. For the task of landing, it results in faster, smooth landings. For the task of obstacle avoidance, it results in higher success rates at higher flight speeds. Our results yield improved robotic visual navigation capabilities and lead to a novel hypothesis on insect intelligence: behaviours that were described as optical-flow-based and hardwired actually benefit from learning processes. Autonomous flight is challenging for small flying robots, given the limited space for sensors and on-board processing capabilities, but a promising approach is to mimic optical-flow-based strategies of flying insects. A new development improves this technique, enabling smoother landings and better obstacle avoidance, by giving robots the ability to learn to estimate distances to objects by their visual appearance.
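A common baseline behind the optical-flow landing strategy discussed above is to hold the flow divergence (descent speed divided by height) constant, which produces an exponentially decelerating descent. The toy simulation below illustrates only that baseline control law under assumed values; it does not reproduce the paper's learning-based distance estimation.

```python
# Toy constant-divergence landing: descend so that (descent speed / height) stays fixed.

def simulate_constant_divergence_landing(height=10.0, divergence_setpoint=0.5,
                                         dt=0.05, min_height=0.05):
    """Return the time taken to reach min_height under the constant-divergence law."""
    t = 0.0
    while height > min_height:
        descent_speed = divergence_setpoint * height  # commanded speed from the divergence law
        height -= descent_speed * dt                  # speed shrinks as the ground approaches
        t += dt
    return t

if __name__ == "__main__":
    print(f"touchdown region reached after ~{simulate_constant_divergence_landing():.1f} s")
```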
... Various groups have proposed vision-based control algorithms and hardware for the new generation of nano-UAVs, inspired by insect vision [e.g., Franceschini et al., 2007; Franceschini, 2014; Briod et al., 2013; Duhamel et al., 2013]. Lightweight cameras perform well when lighting is good and reliable. ...
... As a variation of the EMD, a few methods have been proposed to decode or estimate the angular velocity accounting for various flight behaviours of bees (Brinkworth and O'Carroll 2009; Cope et al. 2016; Wang et al. 2019a, b). Benefiting from their computational efficiency and robustness, many OF-based methods have been applied for near-range navigation of flying robots and micro-aerial vehicles, as reviewed in (Franceschini 2014; Serres and Ruffier 2017). ...
Article
Full-text available
Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which makes them excellent paradigms for learning motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are twofold: 1) the proposed model articulates the forming of both direction-selective (DS) and direction-opponent (DO) responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; 2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics, including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as the global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction (PD) or null-direction (ND) translation. The experiments have verified the effectiveness of the proposed neural system model and demonstrated its responsive preference for faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds. The full text can also be accessed at https://rdcu.be/b5qS4
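The signed, wide-field direction readout described above (positive for preferred-direction, negative for null-direction translation) rests on correlation-type elementary motion detectors. The sketch below shows the generic Hassenstein-Reichardt correlator pooled over a 1-D array; it is a textbook baseline, not the paper's full ON/OFF HS/VS model.

```python
# Generic Hassenstein-Reichardt EMD array pooled into a signed, wide-field output.
import numpy as np

def emd_wide_field(prev_row, curr_row):
    """1-D horizontal EMD array over rows of luminance samples at t-1 and t."""
    prev_row = np.asarray(prev_row, float)
    curr_row = np.asarray(curr_row, float)
    # Correlate the delayed signal at each pixel with the current signal at its neighbour.
    rightward = prev_row[:-1] * curr_row[1:]
    leftward = prev_row[1:] * curr_row[:-1]
    return float(np.sum(rightward - leftward))  # > 0: rightward (PD); < 0: leftward (ND)

if __name__ == "__main__":
    pattern = np.sin(np.linspace(0, 4 * np.pi, 64))
    shifted_right = np.roll(pattern, 2)                  # pattern moved right by two pixels
    print(emd_wide_field(pattern, shifted_right) > 0)    # True: rightward motion detected
```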
... tions [26]. For computationally implementing the LPTCs (also called fly direction-selective neurons, DSNs), there are two main perspectives: 1) the well-known bio-inspired optic flow (OF)-based approaches apply elementary motion detectors (EMDs) to reach the preliminary level of the LPTCs at the local optical unit level, and have been broadly used in flying robots and micro-aerial vehicles, as reviewed in [6], [27]; 2) beyond the OF-based level, a recent LPTC model mimics Drosophila visual processing through multiple neural layers for decoding translating-object direction against cluttered backgrounds [28]. ...
Conference Paper
Full-text available
This paper has been included in the Proceedings of the 2020 IEEE International Conference on Advanced Robotics and Mechatronics (ARM), held in December in Shenzhen. ABSTRACT: Inspired by insects' visual brains, this paper presents an original model of complementary visual neuronal systems for real-time and robust collision sensing. Two categories of wide-field motion-sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied intensively. The LGMDs have specific selectivity to approaching objects in depth that threaten collision, whilst the LPTCs are only sensitive to translating objects in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done on investigating their complementary functionality and selectivity when functioning together. To fill this vacancy, we introduce a hybrid model combining two LGMDs (LGMD-1 and LGMD-2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L), specialising in fast collision perception. With coordination and competition between the different activated neurons, the proximity feature evoked by frontal approaching stimuli can be largely sharpened by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-type neuron computation methods against translating interference.
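The coordination-and-competition idea above can be sketched as a simple gating rule: the collision (LGMD-like) signal is only allowed to trigger avoidance when the translation (LPTC-like) signals are weak, so that pure translating motion does not raise false collision alerts. The rule and the constants below are illustrative assumptions, not the hybrid model's actual integration scheme.

```python
# Toy coordination/competition between a collision channel and two translation channels.

def hybrid_collision_decision(lgmd, lptc_right, lptc_left,
                              collision_threshold=0.5, translation_gate=0.4):
    """All inputs are normalised neuron activations in [0, 1]."""
    translation = max(lptc_right, lptc_left)
    # Competition: strong translation evidence suppresses the collision channel.
    gated_collision = lgmd * (1.0 - min(translation / translation_gate, 1.0))
    return gated_collision > collision_threshold     # True -> trigger collision avoidance

if __name__ == "__main__":
    print(hybrid_collision_decision(0.8, 0.05, 0.02))  # looming, little translation -> True
    print(hybrid_collision_decision(0.8, 0.70, 0.10))  # strong translation -> suppressed, False
```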