ARL-TR-7974 APR 2017
US Army Research Laboratory
Frequency-Domain Characterization of Optic
Flow and Vision-Based Ocellar Sensing for
Rotational Motion
by Nil Z Gurel, Joseph K Conroy, Timothy Horiuchi, and
J Sean Humbert
Approved for public release; distribution is unlimited.
NOTICES
Disclaimers
The findings in this report are not to be construed as an official Department of the
Army position unless so designated by other authorized documents.
Citation of manufacturer’s or trade names does not constitute an official
endorsement or approval of the use thereof.
Destroy this report when it is no longer needed. Do not return it to the originator.
ARL-TR-7974 APR 2017
US Army Research Laboratory
Frequency-Domain Characterization of Optic
Flow and Vision-Based Ocellar Sensing for
Rotational Motion
by Nil Z Gurel
Graduate Student, University of Maryland
Joseph K Conroy
Sensors and Electron Devices Directorate, ARL
Timothy Horiuchi
Associate Professor, University of Maryland
J Sean Humbert
Associate Professor, University of Colorado Boulder
Approved for public release; distribution is unlimited.
REPORT DOCUMENTATION PAGE
Form Approved
OMB No. 0704-0188
Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the
data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing the
burden, to Department of Defense, Washington Headquarters Services, Directorate for Information Operations and Reports (0704-0188), 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302.
Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any penalty for failing to comply with a collection of information if it does not display a currently
valid OMB control number.
PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.
1. REPORT DATE (DD-MM-YYYY)
April 2017
2. REPORT TYPE
Technical Report
3. DATES COVERED (From–To)
04/2016–07/2016
4. TITLE AND SUBTITLE
Frequency-Domain Characterization of Optic Flow and Vision-Based Ocellar
Sensing for Rotational Motion
5a. CONTRACT NUMBER
5b. GRANT NUMBER
5c. PROGRAM ELEMENT NUMBER
6. AUTHOR(S)
Nil Z Gurel, Joseph K Conroy, Timothy Horiuchi, and J Sean Humbert
5d. PROJECT NUMBER
5e. TASK NUMBER
5f. WORK UNIT NUMBER
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES)
US Army Research Laboratory
ATTN: RDRL-SER-L
2800 Powder Mill Rd
Adelphi MD 20783-1132
8. PERFORMING ORGANIZATION REPORT NUMBER
ARL-TR-7974
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES)
10. SPONSOR/MONITOR'S ACRONYM(S)
11. SPONSOR/MONITOR'S REPORT NUMBER(S)
12. DISTRIBUTION/AVAILABILITY STATEMENT
Approved for public release; distribution is unlimited.
13. SUPPLEMENTARY NOTES
14. ABSTRACT
The structure of an animal’s eye is determined by the tasks it must perform. While vertebrates rely on their 2 eyes for all
visual functions, insects have evolved a wide range of specialized visual organs to support behaviors such as prey capture,
predator evasion, mate pursuit, flight stabilization, and navigation. Compound eyes and ocelli constitute the vision-forming
and sensing mechanisms of some flying insects. They provide signals useful for flight stabilization and navigation. In contrast
to the well-studied compound eye, the ocelli, seen as the second visual system, sense fast luminance changes and allow for
fast visual processing. Using a luminance-based sensor that mimics the insect ocelli and a camera-based motion-detection
system, frequency-domain characterization of an ocellar sensor and optic flow (due to rotational motion) is analyzed. Inspired
by the insect neurons that make use of signals from both vision-sensing mechanisms, complementary properties of ocellar and
optic flow estimates are discussed.
15. SUBJECT TERMS
Bio-inspired sensing, rotational motion sensing, vision-based sensing, micro air vehicles, unmanned aerial vehicles, ocelli
16. SECURITY CLASSIFICATION OF:
a. REPORT: Unclassified
b. ABSTRACT: Unclassified
c. THIS PAGE: Unclassified
17. LIMITATION OF ABSTRACT: UU
18. NUMBER OF PAGES: 72
19a. NAME OF RESPONSIBLE PERSON: Joseph Conroy
19b. TELEPHONE NUMBER (Include area code): 301-394-2333
Standard Form 298 (Rev. 8/98)
Prescribed by ANSI Std. Z39.18
Contents
List of Figures v
List of Tables viii
1. Introduction 1
1.1 Motivation 1
1.2 Contributions 2
2. Background 2
2.1 Structure and Function of Compound Eye 2
2.2 Structure and Function of Ocelli 4
2.3 Prior Works Inspired by Insect Ocelli 5
3. Frequency-Domain Characterization of Ocellar Sensor and Optic
Flow 6
3.1 Introduction 6
3.2 Ocellar Sensor 7
3.3 Mathematical Model for the Ocellar Sensor 12
3.4 Optic Flow Computation 15
3.5 Experimental Setup 18
3.6 Magnitude-Squared Coherence 24
3.7 Ground Truth 25
3.8 Understanding Ocellar Sensor’s “Valid Range” 26
3.9 Ocellar Sensor Frequency Characterization 29
3.9.1 Circuit Frequency Characterization 29
3.9.2 Sensor vs. Ground Truth Frequency Characterization 33
3.10 Ocellar Sensor–Gyro Voltage-Velocity Mapping 35
3.11 Performance-Related Parameters 36
3.11.1 Frame Rate 36
3.11.2 Window Size 38
3.11.3 Feature Points 40
3.11.4 Luminance Intensity 41
3.11.5 Photodiode Bending 42
3.12 Test Setup Limitations 43
4. Sensor Fusion 44
4.1 Biological Background for Sensor Fusion 44
4.2 Fusion Approaches 45
4.3 Previous Sensor Fusion Implementations 46
4.4 Ocellar Sensor-Optic Flow Fusion Approach 47
5. Conclusion and Future Work 52
5.1 Conclusion 52
5.2 Future Work 53
6. References 55
List of Symbols, Abbreviations, and Acronyms 61
Distribution List 62
List of Figures
Fig. 1 Insect compound eye and ocelli (image used with Wikimedia
Commons permissions:
https://commons.wikimedia.org/wiki/File:Polistes_ocelli.jpg) .............2
Fig. 2 Structure of compound eye (image used with Wikimedia Commons
permissions:
https://commons.wikimedia.org/wiki/File:Insect_compound_eye_diagr
am.svg) ...................................................................................................3
Fig. 3 Structure of ommatidium (image used with permission from
Cronodon.com: http://cronodon.com/Copyright.html) ..........................3
Fig. 4 Ocellus cross section (image used with Wikimedia Commons
permissions:
https://commons.wikimedia.org/wiki/File:Insect_ocellus_diagram.svg;
see Ref. 16) ............................................................................................4
Fig. 5 Circuit schematics of the 3-stage ocellar sensor. TSL14S photodiode
outputs are band-pass filtered and antagonistically subtracted. Pitch
rate (front–back) is inverted for sign change. ........................................8
Fig. 6 Band-pass filter, with high-pass cutoff at 17 Hz and low-pass cutoff at
145 Hz ....................................................................................................9
Fig. 7 Subtractor and inverter: Subtractor is used for antagonistic subtraction
of filtered signals. Inverter is used for sign change for pitch rate. For
equal resistors in both blocks, direct subtraction and direct inversion is
satisfied. ...............................................................................................10
Fig. 8 Simulated circuit in TI TINA simulation software ..............................11
Fig. 9 Band-pass filter simulation results: 13.78 Hz high-pass cutoff and 174
Hz low-pass cutoff is observed. Phase starts at –90° at 1 mHz and
reaches to –270° at 100 kHz. ...............................................................11
Fig. 10 Mathematical model and assumptions: Photodiode in rotational motion
sees the arbitrary luminance pattern as its azimuthal angle varies ......12
Fig. 11 Optic flow vector for a pixel between 2 consecutive frames ...............16
Fig. 12 Optic flow during rotational and translational motion: without
translational component (V), optic flow is an estimate of only angular
velocity (ω) ..........................................................................................19
Fig. 13 Illustration of test setup: Light source has its own DC supply to avoid
issues of flickering; information from camera, microcontroller unit
(MCU), gyro, and analog-to-digital converter (ADC) are transferred to
the host computer via a USB hub. .......................................................19
Fig. 14 Camera scene (376 × 240 pixel image): DC light source is not in the
FOV of the camera, which is moving along the x-direction ................20
Fig. 15 Overall test setup: Ocellar sensor is positioned in front of light source.
Motor is giving rotational motion to the setup along its shaft axis. The
motor shaft is in vertical orientation, moving the components on it. ..21
Fig. 16 Test setup components: Camera sees the scene shown in Fig. 14.
Camera on the right is not used due to performance issues. ................21
Fig. 17 System block diagram: All of the data collected are stored in laptop .22
Fig. 18 Serial message structure from ocelli to microcontroller includes 2
header, ocelli data, and gyro data bytes ...............................................22
Fig. 19 Post-processing block diagram: Optic flow vectors are computed and
extracted as a text file. The bag file is parsed, interpolated, and
processed for data analysis...................................................................24
Fig. 20 Motor velocity and gyro frequency response, as seen by Vicon motion-
detection system as input: Frequencies after 10 Hz were shown to
prove the decrease in coherence out of controlled motion frequencies.
Gyroscope shows a flatter magnitude response and higher coherence
than motor velocity; therefore, it was chosen to be the ground truth. .26
Fig. 21 Unbent photodiode output vs. motor shaft azimuthal position:
Photodiode outputs increase as they pass by the light source. FOVs are
not overlapping. ...................................................................................27
Fig. 22 Bent photodiode output vs. motor shaft azimuthal position: Photodiode
FOVs are partially overlapping, which is required for the ocellar
sensor to work. In this (incorrect) configuration, there are angles where
simulated roll motions do not produce any change in the photodiode
outputs. .................................................................................................27
Fig. 23 Ocelli in valid range: (above) symmetric photodiode raw output;
(below) gyro and ocelli output for motor azimuthal position (–0.2 to
0.2 radians). Ocelli output is in agreement with gyro in this range. ....28
Fig. 24 Ocelli in invalid range: (above) asymmetric photodiode raw output;
(below) gyro and ocelli output for motor azimuthal position (–1.2 to
0.2 radians). Ocelli output is not in agreement with gyro in this
range. ....................................................................................................29
Fig. 25 Band-pass filter simulated AC transfer characteristic at 0.1–10 Hz:
Magnitude increases with 20 dB/decade. Phase drops from –90° to –
125° at the end of 10 Hz. .....................................................................30
Fig. 26 Right and left band-pass filter measured AC transfer characteristics at
0.1–10 Hz: Magnitude and phase plots are in agreement with
simulation (Fig. 25). .............................................................
Fig. 27 LED sweeping: LED was taped to photodiode and power-supply
signal is swept between 3 and 150 Hz. ................................................31
Fig. 28 Right and left band-pass filter simulated transfer characteristics
between 1 and 100 Hz; simulation is shown to compare with LED
sweeping results in Fig. 29...................................................................
Fig. 29 Right band-pass filter measured transfer characteristics in response to
LED chirp between 3 and 100 Hz: magnitude increases 20 dB/decade
and phase drops from –105° to –180° (in agreement with simulation in
Fig. 28). ................................................................................
Fig. 30 Ocelli frequency response with respect to gyro as input: Frequencies
after 10 Hz were shown to prove the decrease in coherence out of
controlled motion frequencies. Ocellar magnitude is relatively flat,
showing around a 1-dB difference from beginning to end. Phase delay
reaches to –15° at 10 Hz. .....................................................................33
Fig. 31 Time signals of gyro, ocelli, and optic flow in 0.5-, 1-, 5-, and 10-Hz
windows; all sensor outputs are scaled to match gyro (rad/s) at each
window .................................................................................................34
Fig. 32 Frequency response of optic flow with respect to gyro as input:
Overall magnitude decrease is 1.42 dB. Phase delay reaches to –35° at
10 Hz. ...................................................................................................35
Fig. 33 Ocelli–gyro mapping plot shows the expected ocelli output (V) for a
given gyro measurement (rad/s). Ocelli output is monotonically
increasing with increasing gyro values. ...............................................36
Fig. 34 Optic flow frequency response with different frame rates, as seen by
input gyro: As the frame rate decreases, roll-off at higher frequencies
is steeper. Higher frame rate results in better coherence. Phase delay
does not change due to frame rate. .......................................................37
Fig. 35 Optic flow frequency response with different window sizes (w = 10,
20, 30, 40), as seen by input gyro: Very small windows (10 × 10 pixel)
result in erroneous magnitude response. Magnitude response and
coherence improve as window size increases, phase delay remains the
same. ....................................................................................................39
Fig. 36 Optic flow frequency response with different window sizes (w = 50,
60, 70), as seen by input gyro: After 50 × 50-pixel window,
magnitude, phase, and coherence plots do not change. .......................39
Fig. 37 Camera scene (10 × 10 feature points) ................................................40
Fig. 38 Camera scene (4 × 4 feature points) ....................................................40
Fig. 39 Optic flow frequency response with different number of feature points
(f), as seen by input gyro: 2 × 2 feature points result in erroneous
magnitude plot. As the feature points increase, magnitude and phase
plots do not show much change; however, coherence improves. ........41
Fig. 40 Light source input power vs. ocelli peak-to-peak amplitude:
Luminance increase linearly increases the peak-to-peak amplitude. DC
light source is specified in Table 3. .....................................................42
Fig. 41 Bending illustration: The photodiodes should share an intersecting
FOV toward the light source for the sensor to operate. Bending values
30° < β < 45° were observed to give symmetric photodiode outputs. β
= 90° completely overlaps the FOVs, without distinct horizons for
each photodiode. ..................................................................................43
Fig. 42 Illustration of complementary filter .....................................................45
Fig. 43 Frequency response of ocelli, optic flow, and their complementary
fusion: Fourth-order Butterworth filter was used to high-pass ocelli
and low-pass optic flow. The normalized cutoff frequency had to be
kept very small to make use of ocelli’s relatively flat magnitude and
less-delayed phase. Fused response shows coherence is better than
optic flow's. ...........................................................................
Fig. 44 Frequency response of ocelli, optic flow, and their weighted-average
fusion: Ocelli and optic flow time-domain signals are combined to
obtain a result close to ocelli. ...............................................................49
Fig. 45 Magnitude response of ocelli with different luminance values and
optic flow at 30 fps: Increasing luminance implies higher magnitude
for ocelli (L1 < L2 < L3 < L4 < L5). Ambient luminance change necessitates an
adaptive gain. The upper plot shows magnitude-scaled versions
of the ocelli response, not derived from real luminance values.................50
Fig. 46 Hypothetical sensor decision approach: Adjust ocelli gain by
continuously computing error between gyro/OF and ocelli; check if
ocelli is valid to use by comparing gyro/OF; use ocelli if comparisons
allow. ....................................................................................................51
Fig. 47 Hypothetical ocelli gain adjustment approach: Gains > 1 are tuned by
noninverting op-amp. Gains < 1 are tuned by voltage divider. The
tuned outputs are compared with lookup table and microcontrollers
iteratively tune the digital potentiometers until error threshold is low
enough. .................................................................................................52
List of Tables
Table 1 Circuit components ................................................................................9
Table 2 Band-pass filter characteristics ............................................................10
Table 3 Experiment components.......................................................................20
1. Introduction
1.1 Motivation
The design of sensing mechanisms for small unmanned aircraft systems (sUAS)
has many tradeoffs due to limited budgets for power consumption, size, weight, and
the need for both speed and accuracy in a wide range of operating conditions.
Traditionally, inertial measurement units (gyroscopes and accelerometers) are used
to obtain velocity and position data. There has been a rapid evolution of these sensor
systems in recent years toward integrated accelerometer and gyroscope packages
that include both digitization and signal conditioning (e.g., filtering). As the vehicle
sizes have continued to decrease, faster sensing is needed due to the increased
susceptibility of the aircraft to even the smallest disturbances.
Looking to nature, several species of flying insects have been demonstrated to
possess exceptional stability and acrobatic capabilities that match the types of
missions that engineers are trying to accomplish. They provide examples of robust
stability given similar limitations of sensing and processing. The insect body is a
multimodal sensor network. Information from visual, proprioceptive, tactile, and
inertial receptors is collected to provide information about the state of the insect
with respect to its environment.1 Instead of the digital architecture used in
traditional sUAS, insects have analog connections between their sensory systems
and their flight motor neurons. This analog architecture makes them capable of closing
feedback loops at high speed, which is very useful for fast stabilization against
sudden disturbances. Bio-inspired sensing techniques based on these species
present an attractive approach to micro-air-vehicle sensor design.
Many flying insects employ 2 visual systems, the compound eyes and the ocelli
(simple eyes). From the behavioral and electrophysiological experiments cited in
the next sections, the compound eyes and ocelli are thought to work together.
Overall, compound eyes are sensitive to a wide range of information, such as
proximity to obstacles, relative velocity, and rotation rate.2–4 These tasks increase
the information-processing time for the compound eyes, making them unable to
provide fast responses to sudden disturbances. Insects have to balance themselves
quickly to survive. The ocelli, responsible for fewer tasks than the compound
eyes, require less processing time,5,6 which makes them well suited to detecting
sudden disturbances. Inspired by the complementary nature of ocelli and compound
eyes, this report attempts to characterize the frequency response of an ocellar sensor
and optic flow, and ultimately proposes the fusion of 2 sensors for low-cost, wide-
field, visual rotational motion sensing.
1.2 Contributions
The contributions of this report are as follows:
• The comparative open-loop frequency characterization of optic flow and a
luminance-dependent analog rotation-rate sensor that is thought to mimic the
insect ocelli was conducted.
• A sensitivity analysis was performed to identify the parameters that affect optic
flow and ocellar sensor performance in rotational motion.
2. Background
2.1 Structure and Function of Compound Eye
The compound eyes and ocelli are shown in Fig. 1 on the head of a flying insect (Polistes).
The structure of the compound eyes (the 2 large eyes on the sides) is seen in Fig. 2. The
compound eyes are composed of units called ommatidia. Each ommatidium unit
functions as a separate visual receptor, consisting of a lens, cornea, a crystalline
cone, light-sensitive visual cells, and pigment cells (Fig. 3). There may be up to
30,000 ommatidia in a single compound eye. The image perceived is a combination
of inputs from ommatidia pointing at slightly different directions (as seen in Fig. 2,
ommatidia units make up a conic surface). A mosaic-like vision of the environment
is rendered.2,3
Fig. 1 Insect compound eye and ocelli (image used with Wikimedia Commons
permissions: https://commons.wikimedia.org/wiki/File:Polistes_ocelli.jpg)
Fig. 2 Structure of compound eye (image used with Wikimedia Commons permissions:
https://commons.wikimedia.org/wiki/File:Insect_compound_eye_diagram.svg)
Fig. 3 Structure of ommatidium (image used with permission from Cronodon.com:
http://cronodon.com/Copyright.html)
The vision process starts at the ommatidia. Ommatidia photoreceptors capture patterns of
luminance from the visual environment. The captured signal is conditioned through the
lamina plate. The output of the lamina is thought to be the input to the medulla.7,8 The
medulla outputs optic flow-like patterns to the lobula, and the lobula processes these
outputs.9–13 The output signals of lamina neurons are segregated into different
pathways, performing functions such as color discrimination, motion detection, and
intensity encoding. Neurons responding to motion are found in the lobula. They are
thought to receive inputs from hypothetical neural elements called Reichardt
Detectors, or elementary motion detectors (EMDs), residing in the medulla and
calculating motion from pixel-based information with a mechanism called
Reichardt correlation.14 This hypothetical mechanism is proposed to explain
how a neuron that receives only luminance input is able to compute motion.
Frye15 depicts the key components of this algorithm, which are 2 inputs (red, as
photoreceptors), a time delay on one input (d), and multiplication of the correlated
signals.
1) Photons from a visual scene move from left to right.
2) Photons activate the first receptor.
3) The signal from the first receptor is delayed with d as the photons move to
the second receptor.
4) Photons activate the second photoreceptor. Both the delayed signal (from
first receptor) and the undelayed signal (from second receptor) converge
simultaneously onto a multiplication stage, producing a signal related to
direction of motion. Conversely, photons passing from right to left will
output zero for the opposite direction, since there is no delay component
that will deliver simultaneous inputs to multiplication stage.
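A minimal numerical sketch of this delay-and-correlate scheme follows; the first-order low-pass used as the delay element, its time constant, and the test stimulus are illustrative assumptions, not details taken from the report or from Frye.15

```python
import numpy as np

def emd_response(left, right, dt, tau):
    """Minimal Reichardt/EMD sketch: low-pass one photoreceptor signal as the
    delay element d, multiply it with the undelayed neighbor, and subtract the
    mirror-image product to obtain a signed, direction-selective output."""
    alpha = dt / (tau + dt)          # first-order low-pass coefficient
    def delayed(sig):
        out = np.zeros_like(sig)
        for i in range(1, len(sig)):
            out[i] = out[i - 1] + alpha * (sig[i] - out[i - 1])
        return out
    return delayed(left) * right - delayed(right) * left

# Example: a luminance edge drifting from the left receptor to the right receptor
t = np.arange(0, 1, 1e-3)
left = (t > 0.40).astype(float)      # edge reaches the left receptor first
right = (t > 0.45).astype(float)     # ...and the right receptor 50 ms later
print(np.sum(emd_response(left, right, dt=1e-3, tau=0.02)))   # positive: left-to-right motion
```

Motion in the opposite direction flips the sign of the summed output, which is the direction selectivity described in the steps above.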
2.2 Structure and Function of Ocelli
Ocelli differ from the compound eye in that they have only a single lens covering
an array of photoreceptors, as seen in Fig. 4. Ocelli are found on the frontal surface
of the head of many insects. Ocelli tend to be larger in flying insects (bees,
dragonflies, locusts) and are typically found as a triplet. Two lateral ocelli are found
to the left and right of the head, while a median ocellus is directed frontally.
Fig. 4 Ocellus cross section (image used with Wikimedia Commons permissions:
https://commons.wikimedia.org/wiki/File:Insect_ocellus_diagram.svg; see Ref. 16)
Various studies have been conducted to reveal the function of the ocelli for different
insects. Although it is called an eye, the locust ocellus is claimed to underfocus the
image, hence showing hardly any image detail. In contrast to this underfocusing in
the locust, Ref. 17 suggests that the dragonfly ocellus,
which is believed to be highly evolved, is able to detect some image details.
It is also suggested that the temporal and spatial filtering characteristics of locust
ocelli neurons are well suited to detect instability in flight.18,19 Flight-stabilization
studies were summarized by Heisenberg and Reinhard,4 most of which
were conducted by releasing dragonflies with ablated ocelli. Dragonflies
show unstable flight attitudes without ocelli. Kastberger and Schumann20 evaluate
the flight behavior of bees with and without occluded ocelli, stating that normal
bees (without occlusion) show quicker flight behaviors.
Another characteristic of ocelli is higher photic sensitivity, compared to the
compound eye for locusts18 and bees.21 This information is useful if we think of the
ocelli as integrators of the overall intensity or a blurred visual field. If the photic
sensitivity is high, small changes in light intensity will be sensed. Taking into
account that the images sensed by the ocelli are highly blurred, ocelli should be
concerned with the overall image intensity. Studies of dragonflies22,23 and locusts24
claim that ocelli are rotation detectors, important for gaze stabilization. Research
by Schuppe and Hengstenberg25 also shows gaze stabilization cues by the ocelli.
The beginning and end of daily activities of insects depend on light intensity.
Studies of bees,23 crickets,26 and moths27–29 claim that the ocelli perceive low light
intensity to control daily activities.
Compared to ocelli, compound eyes offer a panoramic field of view (FOV) and
high temporal resolution, with optic flow computation abilities.30–32 These features
are beneficial for tasks like visually guided navigation (e.g., obstacle avoidance,
landing strategy, saccade response, hovering strategy, clutter response, collision
response, and fixation strategy), each of which is described by Barrows et al.33 with
specific test setups for bees and drosophilae.
2.3 Prior Works Inspired by Insect Ocelli
Because of their prominent computation advantages, simplicity, and applicability to
the small-scale world, ocelli-inspired vision sensors have been developed by many
groups. These implementations mainly focus on closed-loop attitude control,
outputting pitch and roll angles. Neumann and Bülthoff34 present a simulation
model of an autonomous agent flying through a virtual environment with a daylight
sky model. It uses over 200 receptors to detect local intensities. These receptors are
distributed evenly between adjacent directions on an agent body coordinate system.
The average intensity difference between 2 directions is computed to estimate the
roll angle. Subsequently, a simulation of an eye model with a special receptor
distribution was presented in a virtual environment in their 2002 report.35 The
ocelli-like "wide-field measurement units", which use a locally weighted intensity as
the receptor response, have their outputs subtracted in adjacent directions. Using an EMD and ocelli
outputs, optimal receptive fields for attitude estimation, yaw rotations, and nearness
are derived. A 2003 study36 implements ocelli, haltere (an organ responsible for
balance in flying insects), optic flow, and magnetic flow sensors for a
micromechanical flying insect. These sensors were used to estimate body attitude
relative to a fixed frame, body rotational velocities, obstacle avoidance, and
heading adjustment, respectively. The ocelli consist of 4 photodiodes, arranged in
a pyramidal configuration. The 2 output signals are obtained by subtracting the
opposite photodiode outputs. Schenato et al.37 use this implementation and
proposes a stabilizing attitude control law for a sinusoidal intensity function. Javaan
et al.38 demonstrate an embedded implementation of ocelli-like sensor. It uses the
difference between ultraviolet and green photodiode signals to obtain attitude
estimation, stating that this reduces the biasing effect of clouds and the sun. Kerhuel
et al.39 use a camera to track a reference heading point and perform gaze
stabilization by using the difference between reference and instantaneous heading
signal. Moore et al.40 use camera images that are classified by the intensity
information in the YUV (luminance, blue, red) channel into sky and ground regions
to estimate roll and pitch angles. Javaan and Akiko41 use 4 ultraviolet/green
photodiode pairs to detect attitude angle and demonstrate roll attitude tracking on
an aircraft. A 2014 study42 proposes an ocelli-based sensor, which is also used in
this work, to output roll and pitch rate, rather than angle. This sensor was used for
the frequency characterization in Section 3 of this report.
3. Frequency-Domain Characterization of Ocellar Sensor and
Optic Flow
3.1 Introduction
Gremillion et al.42 present experimental data that use the complementary response
of an analog ocellar sensor and a commercial optic flow sensor. Inspired by this
complementary response information and cues from insect ocelli and compound
eye complementary task mechanism, we designed a test platform that generates
rotational motion to characterize the frequency-domain response of both optic flow
and an ocellar sensor, and gathers information from different sources such as motor
controller, microcontroller, and gyroscope. The optic flow is computed using the
images collected by a camera and fisheye lens. A microelectromechanical systems
gyroscope and a Vicon motion-detection system are used as ground truth. This
section discusses the ocelli and optic flow frequency response characteristics and
the performance parameters for the ocelli and optic flow computation.
3.2 Ocellar Sensor
The ocellar sensor (based on work by Gremillion et al.42) gives roll and pitch rate
estimates using the luminance change sensed by right–left or front–back
photodiode pairs. The luminance signals from left and right photodiodes are band-
pass filtered. The high-pass filter portion serves as the differentiator element to
estimate rate information introduced by luminance change. The high-frequency
cutoff was added to reject the flickering noise for indoor usage. The filtered signals
from the photodiodes are antagonistically subtracted from each other (left–right or
front–back) to obtain roll and pitch rate estimates. The overall circuit schematics
are shown in Fig. 5 and the circuit components are listed in Table 1.
Fig. 5 Circuit schematics of the 3-stage ocellar sensor. TSL14S photodiode outputs are
band-pass filtered and antagonistically subtracted. Pitch rate (front–back) is inverted for sign
change.
Table 1 Circuit components
Component                Value/part number
R1, R3, R9, R11          1.1 kΩ
R5–R8, R13–R20           1 kΩ
R2, R4, R10, R12         20 kΩ
C1, C3, C6, C8           1 μF
C2, C4, C7, C9           470 nF
Operational amplifier    ISL28208
Photodiode               TSL14S
Vdd                      5 V
The circuit consists of 3 stages:
1) Light-to-Voltage Conversion
Light-to-voltage conversion by a TSL14S optical sensor43 that combines a
photodiode and a transimpedance amplifier. The sensor has wideband spectral
response characteristics between 320 and 1050 nm. Its peak output is at 640 nm.
The output voltage from this element is the electrical equivalent of luminance seen
by the photodiode.
2) Band-Pass Filtering
This stage consists of an active bandpass filter with a designed high-pass cutoff at
17 Hz and low-pass cutoff at 145 Hz (see Fig. 6).
Fig. 6 Band-pass filter, with high-pass cutoff at 17 Hz and low-pass cutoff at 145 Hz
The input–output relationship of a high-pass filter is modeled as
V_out(s)/V_in(s) = sRC / (1 + sRC) .    (1)
The input voltage is the luminance value from the TSL14S package. The output
voltage approximates the luminance time rate of change. The function of the low-
pass filter is to attenuate high-frequency noise. The final band-pass filter transfer
function is in Eq. 2.
V_out(s)/V_in(s) = −R2 C1 s / [R1 R2 C1 C2 s² + (R1 C1 + R2 C2) s + 1] .    (2)

The characteristic quantities of this second-order transfer function are the low-pass
cutoff frequency ω_lp, the high-pass cutoff frequency ω_hp, and the maximum
input–output gain K, specified in Table 2.
Table 2 Band-pass filter characteristics
ω_hp = 1/(R2 C2)     106 rad/s (16.9 Hz)
ω_lp = 1/(R1 C1)     909 rad/s (145 Hz)
K = −R2/R1           −18.2
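For reference, the filter of Eq. 2 can be checked numerically; the sketch below assumes only the component values of Table 1 (the frequency grid and library calls are illustrative, and Python is used here for brevity even though the report's tools were SPICE and C++):

```python
import numpy as np
from scipy import signal

# Component values from Table 1 (R1 = 1.1 kΩ, R2 = 20 kΩ, C1 = 1 µF, C2 = 470 nF)
R1, R2, C1, C2 = 1.1e3, 20e3, 1e-6, 470e-9

# Band-pass transfer function of Eq. 2: -R2*C1*s / (R1*R2*C1*C2*s^2 + (R1*C1 + R2*C2)*s + 1)
num = [-R2 * C1, 0.0]
den = [R1 * R2 * C1 * C2, R1 * C1 + R2 * C2, 1.0]
H = signal.TransferFunction(num, den)

print("high-pass corner:", 1 / (R2 * C2) / (2 * np.pi), "Hz")   # ~16.9 Hz
print("low-pass corner :", 1 / (R1 * C1) / (2 * np.pi), "Hz")   # ~144.7 Hz

# Magnitude/phase over 0.1 Hz - 1 kHz, comparable to the TINA simulation range
f = np.logspace(-1, 3, 500)
w, mag_db, phase_deg = signal.bode(H, 2 * np.pi * f)
```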
3) Linear combination stage:
This stage includes a difference amplifier to subtract right–left filter outputs and
front–back filter outputs. The difference amplifier output from the right–left inputs
estimates the roll rate. The difference amplifier output from the front–back inputs
is inverted (for sign change) by an inverting amplifier. The inverting amplifier output
estimates the pitch rate (see the blocks in Fig. 7).
Fig. 7 Subtractor and inverter: Subtractor is used for antagonistic subtraction of filtered
signals. Inverter is used for sign change for pitch rate. For equal resistors in both blocks, direct
subtraction and direct inversion is satisfied.
For R1–R6 = 1 kΩ, the outputs V_o,diff and V_o,inv are modeled as

V_o,diff = V_2 − V_1    (3)

V_o,inv = −V_in .    (4)
The band-pass filter was simulated (Fig. 8) using the macro model of the ISL28208
operational amplifier in the TINA-TI SPICE-based simulation program. The
simulated circuit and alternating current (AC) transfer characteristics for
frequencies between 1 mHz and 1 MHz are seen in Figs. 8 and 9. According to the
simulation, due to the zero at the origin in the numerator of the transfer function, the
amplitude increases by 20 dB/decade until it reaches the first pole. The high-pass
–3 dB frequency is seen as approximately 13 Hz. The maximum amplitude is 5.58 dB around 55
Hz. The low-pass –3 dB frequency is seen as 175 Hz.
Fig. 8 Simulated circuit in TI TINA simulation software
Fig. 9 Band-pass filter simulation results: 13.78 Hz high-pass cutoff and 174 Hz low-pass
cutoff are observed. Phase starts at –90° at 1 mHz and reaches –270° at 100 kHz.
3.3 Mathematical Model for the Ocellar Sensor
With reference to Fig. 10, the variables used to explain the ocellar sensor are as
follows:

γ: azimuth angle of the photodiode
θ: angular position
α: photodiode field of view
ω = dγ/dt: angular speed
L(θ): luminance, assumed periodic
Λ: light source field of luminance
Fig. 10 Mathematical model and assumptions: Photodiode in rotational motion sees the
arbitrary luminance pattern as its azimuthal angle varies
The test setup has a DC spotlight source, which acts as the "sun". For simplicity,
the luminance is modeled as a rectangular function with fixed edges from −Λ/2 to Λ/2.
The photodiode FOV is also modeled as a rectangular function with edges at the FOV
edges, −α/2 to α/2.

W(θ, α): photodiode field-of-view function, modeled as a rectangular filter with
edges at −α/2 and α/2
y: photodiode output (or signal input to the band-pass filter)
∗: convolution operation

The photodiode integrates the luminance in its FOV W(θ, α):

y(γ) = (L ∗ W)(γ) = ∫ L(θ) W(γ − θ, α) dθ .    (5)
Let y(γ) be the photodiode output taken at azimuth angle γ. While the circuit is
in rotational motion with angular speed ω, this photodiode output becomes
y(γ − ωt) at time t. Thus, the photodiode output encodes both spatial (γ) and
temporal (t) information:

y(γ) → y(γ − ωt) .    (6)
Since the photodiode output spans 2 domains, the Fourier transform with respect to
both the spatial and temporal variables should be taken in order to express it in the
Fourier domain. Let s(γ, t) denote a function in the spatial and temporal domain and
S(f_γ, f_t) denote its Fourier transform:

y(γ − ωt) ⟷ Y(f_γ, f_t) .    (7)
Properties used:

Shifting property in time/space and Fourier domain:

g(x − a) ⟷ G(f) e^(−j2πfa) .    (8)

Convolution:

g(x) ∗ h(x) ⟷ G(f) H(f) .    (9)
Taking the Fourier transform with respect to the spatial variable γ:

y(γ − ωt) ⟷ Y(f_γ) e^(−j2π f_γ ω t) .    (10)

Then, taking the Fourier transform with respect to the temporal variable t gives the
frequency domain of a rotational motion:

Y(f_γ, f_t) = Y(f_γ) δ(f_t + ω f_γ)    (11)

y(γ − ωt) ⟷ Y(f_γ) δ(f_t + ω f_γ) .    (12)
This means that all the energy of the rotating photodiode output is contained in a
plane of the spatiotemporal frequency domain.44,45 The equation of this plane is

f_t + ω f_γ = 0 .    (13)

The rectangular luminance function can be described as L(θ) ~ rect(θ/Λ)
with space and time axes. A rectangular pulse in space, rect(x/Λ), has a sinc form in
the frequency domain:

rect(x/Λ) ⟷ Λ sinc(Λ f) .    (14)

The corresponding frequency spectrum appears as a cut of the spatial spectrum46
Λ sinc(Λ f_γ) by a wall of Dirac impulses situated along the direction f_t + ω f_γ = 0.
Overall,

Y(f_γ, f_t) = Λ sinc(Λ f_γ) α sinc(α f_γ) δ(f_t + ω f_γ) .    (15)
Equation 15 represents the photodiode output given as input to the band-pass filter
in the circuit. Band-pass filtering is a temporal process; thus, the band-pass function
has only a temporal variable. Let the band-pass filter transfer function in the time and
Fourier domains be defined as

h_bp(t) ⟷ H_bp(f_t) .    (16)

From the ocellar sensor section, H_bp(f_t) is defined as

H_bp(f_t) = V_out(f_t)/V_in(f_t)
          = −R2 C1 (j2πf_t) / [R1 R2 C1 C2 (j2πf_t)² + (R1 C1 + R2 C2)(j2πf_t) + 1] .    (17)
The output of the band-pass filter can be written as

y_bp(γ, t) = y(γ − ωt) ∗ h_bp(t) .    (18)

The Fourier transform of the output becomes

y_bp(γ, t) ⟷ Y_bp(f_γ, f_t)    (19)

Y_bp(f_γ, f_t) = Y(f_γ) δ(f_t + ω f_γ) H_bp(f_t) .    (20)
This output represents the luminance output filtered by one photodiode. Assume
another photodiode has a different azimuth angle, offset by Δγ from the first
photodiode. The luminance perceived by it will be

y_2(γ, t) = y(γ − ωt − Δγ) ⟷ Y_2(f_γ, f_t)    (21)

y_bp,2(γ, t) = y_2(γ, t) ∗ h_bp(t) ⟷ Y_bp,2(f_γ, f_t)    (22)

Y_2(f_γ, f_t) = Y(f_γ) δ(f_t + ω f_γ) e^(−j2π f_γ Δγ) .    (23)

After band-pass filtering, the Fourier transform of the second output becomes

Y_bp,2(f_γ, f_t) = Y(f_γ) δ(f_t + ω f_γ) e^(−j2π f_γ Δγ) H_bp(f_t) .    (24)
The difference amplifier implements direct subtraction between the 2 band-pass filter
outputs as the roll-rate estimate. Let the roll rate be denoted as p(γ, t):

p(γ, t) = y_bp,1(γ, t) − y_bp,2(γ, t)    (25)

p(γ, t) ⟷ P(f_γ, f_t)    (26)

P(f_γ, f_t) = Y_bp,1(f_γ, f_t) − Y_bp,2(f_γ, f_t)
            = Y(f_γ) δ(f_t + ω f_γ) H_bp(f_t) − Y(f_γ) δ(f_t + ω f_γ) e^(−j2π f_γ Δγ) H_bp(f_t)
            = Y(f_γ) δ(f_t + ω f_γ) H_bp(f_t) [1 − e^(−j2π f_γ Δγ)] .    (27)
Thus, the circuit output depends on

• the photodiode FOV α,
• the band-pass filter characteristics H_bp,
• the luminance function L(θ), and
• the photodiode angular separation Δγ.

The photodiode field of view and band-pass filter characteristics are inherent in the
circuit under test, and these variables are fixed. We have control of the luminance
function L(θ) and the photodiode angular separation Δγ. The luminance function is also
dependent on the light intensity (or, the input power given to the light source).
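A rough numerical sketch of this model is given below. The source width, photodiode FOV, angular separation, drive amplitude, and sample rate are hypothetical values chosen to keep the simulated sensor in its valid range; they are not the experimental values, and Python is used only for illustration:

```python
import numpy as np
from scipy import signal

# Two rect-FOV photodiodes, offset by dgamma, rotate past a rect luminance patch;
# their outputs are band-pass filtered and subtracted to estimate roll rate.
Lambda, alpha, dgamma = np.deg2rad(60), np.deg2rad(90), np.deg2rad(40)
fs, T = 1000.0, 10.0
t = np.arange(0, T, 1 / fs)

# Small sinusoidal rotation at 1 Hz; omega_t is the true angular rate (ground truth)
gamma_t = 0.05 * np.sin(2 * np.pi * 1.0 * t)
omega_t = np.gradient(gamma_t, t)

def photodiode(gamma_center):
    """Integrate the rect luminance over a rect FOV centered at gamma_center (Eq. 5)."""
    lo = np.maximum(gamma_center - alpha / 2, -Lambda / 2)
    hi = np.minimum(gamma_center + alpha / 2, Lambda / 2)
    return np.clip(hi - lo, 0.0, None)

y1 = photodiode(gamma_t + dgamma / 2)
y2 = photodiode(gamma_t - dgamma / 2)

# Band-pass filter of Eq. 2 (R1 = 1.1 kΩ, R2 = 20 kΩ, C1 = 1 µF, C2 = 470 nF), discretized
R1, R2, C1, C2 = 1.1e3, 20e3, 1e-6, 470e-9
b, a = signal.bilinear([-R2 * C1, 0.0],
                       [R1 * R2 * C1 * C2, R1 * C1 + R2 * C2, 1.0], fs)
roll_est = signal.lfilter(b, a, y1) - signal.lfilter(b, a, y2)   # Eq. 25
# roll_est tracks omega_t up to a gain set by the luminance geometry and dgamma (Eq. 27)
```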
3.4 Optic Flow Computation
Optic flow is an approximation of apparent motion of brightness patterns observed
when an observer (i.e., camera) is moving relative to the objects it images. Optic
flow methods try to calculate where a pixel in Image A goes to in a consecutive
Image B. In 2 dimensions, optic flow specifies how much a pixel of an image moves
between adjacent frames in a sequence.47 The basis of optic flow is the brightness constancy
equation, which eventually forms the 2-D motion constraint.
Assume that I(x,y,t) is the intensity of pixel positioned at location (x,y) in a frame
taken at time t. In the frame taken at time (t+Δt), this pixel moves to the location
(x+Δx, y+Δy) (see Fig. 11).
Fig. 11 Optic flow vector for a pixel between 2 consecutive frames
Assuming the brightness of the pixel does not change over time,

I(x, y, t) = I(x + Δx, y + Δy, t + Δt) .    (28)

Performing a first-order Taylor series expansion about I(x, y, t):

I(x + Δx, y + Δy, t + Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt + higher-order terms .    (29)

Assuming very small motion and ignoring the higher-order terms,

(∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt = 0 .    (30)
Dividing everything by Δt:

(∂I/∂x)(Δx/Δt) + (∂I/∂y)(Δy/Δt) + (∂I/∂t)(Δt/Δt) = 0 .    (31)

Denoting

Δx/Δt = u,   Δy/Δt = v,   ∂I/∂x = I_x,   ∂I/∂y = I_y,   ∂I/∂t = I_t ,    (32)

the constraint becomes

I_x u + I_y v + I_t = 0 .    (33)
Here, u and v are the x and y components of optic flow (for the motion described
in Fig. 14). The equation can be written more compactly as

(I_x, I_y) · (u, v) = −I_t    (34)

∇I · V = −I_t ,    (35)

where ∇I = (I_x, I_y) is the spatial intensity gradient and V = (u, v) is the velocity
of the pixel (x, y) at time t. Equation 35 is called the 2-D motion constraint equation.
This equation has 2 unknowns, (u, v), which relates to the aperture problem. If
the motion detector's aperture is much smaller than the contour it observes, it can
only be sensitive to the component of the contour's motion that is perpendicular to
the edge of the contour. It is blind to any motion parallel to the contour, because
movement in that direction does not change the appearance of anything within the
aperture. To find optic flow vectors, additional equations are needed. Many optic
flow computation methods focus on additional constraints that attempt to recover
the optic flow vectors. Lucas and Kanade48 assume that the displacement of the
image contents between 2 frames is constant within a neighborhood of the point
under consideration. Horn and Schunck49 assume smoothness in the flow over the
whole image and prefer solutions that show more smoothness.
For the Lucas–Kanade motion algorithm,48 the 2-D motion constraint equation is
assumed to hold for all pixels within a window centered at p. This means that the
motion constraint equation holds for all the pixels in the window with the same
unknowns V = (u, v). This set of equations yields an overdetermined system that
has more equations than unknowns:

I_x(q_1) u + I_y(q_1) v = −I_t(q_1)
I_x(q_2) u + I_y(q_2) v = −I_t(q_2)
...
I_x(q_n) u + I_y(q_n) v = −I_t(q_n) ,

where q_1, q_2, ..., q_n are the pixels inside the window. In matrix form, A v = b, where

A = [ I_x(q_1)  I_y(q_1) ;  I_x(q_2)  I_y(q_2) ;  ... ;  I_x(q_n)  I_y(q_n) ],
v = [ u ; v ],
b = [ −I_t(q_1) ; −I_t(q_2) ; ... ; −I_t(q_n) ].
The least squares principle can be applied to solve this overdetermined system:

AᵀA v = Aᵀb    (36)

v = (AᵀA)⁻¹ Aᵀb    (37)

[ u ; v ] = [ Σ_i I_x(q_i)²   Σ_i I_x(q_i) I_y(q_i) ;  Σ_i I_x(q_i) I_y(q_i)   Σ_i I_y(q_i)² ]⁻¹ [ −Σ_i I_x(q_i) I_t(q_i) ; −Σ_i I_y(q_i) I_t(q_i) ] .    (38)

Optic flow vectors u and v are searched for in a tracking window, and the best match
is found using the least squares method. This system is solvable if AᵀA is invertible.
The eigenvalues of AᵀA (λ_1 ≥ λ_2) should not be too small, and AᵀA should be
well conditioned (the ratio λ_1/λ_2 should not be too large), so λ_1 and λ_2 should be
somewhat similar to each other in magnitude. In other words, very small
eigenvalues are interpreted as flat surfaces, and eigenvalues with λ_1 ≫ λ_2
are interpreted as "edges". Optimum eigenvalues should be large enough and have
similar amplitude.50
One drawback of the Lucas–Kanade algorithm is that it theoretically fails for large
motions. If the motion is too large, higher-order terms may dominate Eq. 29 (the
first-order Taylor series expansion). Reducing the image resolution may solve this
issue. A pyramidal approach is available to convert large motions to small
motions.51
Different optic flow computation methods can be described as either “dense” or
“sparse.” From a performance point of view, dense computation methods (e.g.,
Horn and Schunck49 and Farnebäck52) that process all of the pixels in the image are
slow for real-time applications. Instead, sparse techniques (i.e., Lucas and
Kanade48) only process the pixels of interest. For real-time applications that use
optic flow computation to feed the current state of an object back to a control loop,
sparse techniques may be preferred over dense techniques due to faster
computational performance (and, thus, higher sampling rate). In practice, we
achieved 60 frames per second (fps) using the Lucas–Kanade algorithm but only
13 fps for the Farnebäck algorithm (376 × 240-pixel, 8-bit monochromatic image
sequence). For this work, the Lucas–Kanade algorithm is used with predefined
feature points distributed over the imagery. The feature points are the center pixels
about which the Lucas–Kanade algorithm is run to determine an optic flow vector. As
the number of feature points increases, so does the number of optic flow vectors.
The x-components of the optic flow vectors are summed to obtain a single optic flow
value.
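A condensed sketch of this sparse pipeline is shown below. The report's implementation was in C++; the file name, grid spacing, and Lucas–Kanade window settings used here are placeholders:

```python
import cv2
import numpy as np

GRID = 10                                    # 10 x 10 predefined feature points
cap = cv2.VideoCapture("imagery.avi")        # hypothetical recorded image sequence

ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
h, w = prev.shape
xs = np.linspace(0, w - 1, GRID)
ys = np.linspace(0, h - 1, GRID)
points = np.float32([[x, y] for y in ys for x in xs]).reshape(-1, 1, 2)

flow_x = []                                  # one scalar optic flow value per frame pair
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev, gray, points, None, winSize=(50, 50), maxLevel=2)
    d = (new_pts - points).reshape(-1, 2)
    flow_x.append(float(np.sum(d[status.ravel() == 1, 0])))  # sum of x-components
    prev = gray
```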
3.5 Experimental Setup
In general, from Fig. 12, the optic flow experienced by an imager is33

OF = ω + (V/D) sin(θ) ,    (39)

where ω is the angular velocity and V is the translational velocity of the vehicle, D is the
distance to an object, and θ is the angle between the direction of travel and the direction of
the object. If the translational component (V) is zero, optic flow is proportional to the
angular velocity.
Fig. 12 Optic flow during rotational and translational motion: without translational
component (V), optic flow is an estimate of only angular velocity (ω)
A mechanism was constructed to characterize optic flow and ocellar sensor over
0.1–10 Hz rotational mechanical input. Figure 13 shows the illustration of the test
setup; Fig. 14 shows the scene the camera sees; Table 3 shows the system
components; and Figs. 15 and 16 show the general and close-up views of the
components. The block diagram of the system is shown in Fig. 17.
Fig. 13 Illustration of test setup: Light source has its own DC supply to avoid issues of
flickering; information from camera, microcontroller unit (MCU), gyro, and analog-to-digital
converter (ADC) are transferred to the host computer via a USB hub.
Fig. 14 Camera scene (376 × 240 pixel image): DC light source is not in the FOV of the
camera, which is moving along the x-direction
Table 3 Experiment components
Equipment              Model/manufacturer
Light source           LED1 OOWA-56 LED video light
Light-source supply    GW-Instek PSW-8027 programmable switching DC power supply
Motor                  Animatics SmartMotor SM2340D
Motor supply           PS42V6AG-110, 251 W, Moog Animatics
Signal generator       Tektronix AFG3252
Camera                 uEye UI-1221LE-M-GL, USB 2.0, 752 × 480, CMOS mono, 87.2 fps, 8-bit
Lens                   Sunex DSL227 miniature superfisheye lens, 180° FOV
Microcontroller        Arduino UNO
ADC                    MCP3008 8-channel, 10-bit ADC with serial peripheral interface (SPI)
USB hub                Hosa Technology
Gyroscope              Pololu MinIMU-9 v3 (contains L3GD20H 3-axis gyro)
Fig. 15 Overall test setup: Ocellar sensor is positioned in front of light source. Motor is
giving rotational motion to the setup along its shaft axis. The motor shaft is in vertical
orientation, moving the components on it.
Fig. 16 Test setup components: Camera sees the scene shown in Fig. 14. Camera on the right
is not used due to performance issues.
Fig. 17 System block diagram: All of the data collected are stored in laptop
All data are stored in a laptop running Ubuntu 14.04 environment. The ROS
environment is used to implement the robot software. ROS is an open-source
framework for writing robot software, comprising a collection of tools, libraries, and
conventions. It allows for compact storage and data publishing from multiple
peripherals. For this system, each component is represented by a different ROS
“node” that allows for the compilation of multiple C++ files and stores data in a
“bag” file. Once the data are collected, the bag file is “unbagged” and parsed to
extract the data.
To record the ocellar sensor analog voltage outputs, an ADC board is used. For
ground truth, a 16-bit gyroscope is also used. The ADC board (MCP3008)
communicates with the Arduino UNO microcontroller via SPI. Gyroscope
communication is through the inter-integrated circuit (I2C) protocol. The
microcontroller is programmed in C language. It reads ADC and gyroscope outputs,
and parses the values into most significant (MSB) and least significant (LSB) bytes.
Each sample consists of one MSB and one LSB (i.e., a 2-byte word). A message is
created (see Fig. 18) with 2 header bytes to be sent to the laptop via serial
communication at 115200 bps. A 15-ms delay is added between each byte sent to
allow the receiver buffer to be cleared to avoid overwriting.
Fig. 18 Serial message structure from ocelli to microcontroller includes 2 header, ocelli data,
and gyro data bytes
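For illustration, a host-side parser for this stream might look as follows; the header byte values, word order, and one-word-per-sensor layout are assumptions, since the report does not specify them:

```python
import struct
import serial  # pyserial

HEADER = b"\xAA\x55"     # hypothetical header byte values
port = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

def read_sample():
    """Resynchronize on the 2 header bytes, then unpack MSB-first 16-bit words
    (assumed here: one ocellar ADC word followed by one gyro word)."""
    window = b"\x00\x00"
    while window != HEADER:
        window = window[1:] + port.read(1)   # slide byte-wise until the header aligns
    ocelli_raw, gyro_raw = struct.unpack(">HH", port.read(4))
    return ocelli_raw, gyro_raw
```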
The servo motor can be operated in either position or velocity mode. Velocity mode
does not offer control in position. The motor is controlled by sending serial
messages in Ani-Basic language. (The command information and serial
communication are specified in the developer’s guide.53) For both position and
velocity modes, specific trajectory files are created in MATLAB, including
velocity/position trajectory (e.g., sine wave, square wave) and acceleration
information. These trajectories are recorded as text files and read by a C++ code
that communicates with the motor in Ani-Basic Language via serial communication
at 115,200 bps. The commanded position, velocity, acceleration and real-time
position, velocity, and acceleration values can be read back from the motor
controller. Reading the motor values allowed us to validate the gyroscope output
and to see whether the ocelli circuit is in the desired position. To measure
frequency response characteristics, concatenated sine waves at different frequencies
are sent as the velocity trajectory in velocity mode. To understand the ocellar
sensor's validity (explained in the next section), step inputs are sent in position mode.
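A sketch of generating such a concatenated-sine velocity trajectory is shown below; the report generated these files in MATLAB, and the frequency list, amplitude, sample time, and file name used here are placeholders:

```python
import numpy as np

# Concatenated constant-amplitude sine segments spanning the 0.1-10 Hz test band
freqs = np.round(np.logspace(np.log10(0.1), np.log10(10.0), 15), 3)
amp = 0.5             # peak angular velocity command (rad/s), an assumption
dt, cycles = 0.01, 5  # sample time and number of cycles per frequency

segments = []
for f in freqs:
    t = np.arange(0, cycles / f, dt)
    segments.append(amp * np.sin(2 * np.pi * f * t))
trajectory = np.concatenate(segments)
np.savetxt("velocity_trajectory.txt", trajectory)   # read later by the motor-control code
```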
After storing the gyro, ocelli, Vicon, and raw imagery data in a bag file, post-
processing occurs (and is summarized in Fig. 19). Using the raw imagery in the bag
file, optic flow field vectors are computed with another C++ code, using the
OpenCV Lucas–Kanade algorithm. A text file is generated that includes the time
stamps and optic flow x and y vectors for each image. For the other data, the bag
file is unbagged and parsed to extract the ocelli, gyro, Vicon, motor data, and
related timestamps. Optic flow vectors generated from the Lucas–Kanade
algorithm are also parsed. The text file contains 60-fps optic flow information. To
obtain lower-frame-rate results, the optic flow values are downsampled to frame
rates that are factors of 60. Since we have data coming from different sources at different sampling rates,
synchronization is necessary using a common time vector. A common time vector
for all data is created with the lowest sampling time possible, which is the
microcontroller sampling time, 0.003 s. All data are interpolated using this time
vector.
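A sketch of this resampling step, assuming each parsed stream is simply a pair of timestamp and value arrays (the names are placeholders):

```python
import numpy as np

DT = 0.003                                   # microcontroller sampling time (s)

def resample(streams):
    """streams: dict of name -> (timestamps, values). Returns the common time
    vector and every stream interpolated onto it."""
    t0 = max(t[0] for t, _ in streams.values())
    t1 = min(t[-1] for t, _ in streams.values())
    t_common = np.arange(t0, t1, DT)
    return t_common, {name: np.interp(t_common, t, x)
                      for name, (t, x) in streams.items()}

# Example: t_common, data = resample({"gyro": (t_gyro, gyro), "ocelli": (t_oc, oc)})
```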
Since the communication modules are developed for embedded platforms, it will
not take much effort to transfer this system to onboard computers used on quadrotor
helicopters (quadcopters) through Secure Shell (SSH). The time synchronization
can be carried out with software that forces one computer's clock to follow
another's (e.g., "chrony" synchronization for ROS).
Fig. 19 Post-processing block diagram: Optic flow vectors are computed and extracted as a
text file. The bag file is parsed, interpolated, and processed for data analysis.
3.6 Magnitude-Squared Coherence
The spectral coherence is a measure that can be used to examine the relationship
between 2 signals (or data sets). It is commonly used to estimate the power transfer
between the input and output of a linear system. The magnitude-squared coherence
between two signals x(t) and y(t) is defined as

C_xy(f) = |G_xy(f)|² / [G_xx(f) G_yy(f)] ,    (40)

where G_xy(f) is the cross-spectral density between x and y, and G_xx(f) and G_yy(f) are
the auto-spectral densities of x and y, respectively. If the signals are ergodic
(statistical properties can be deduced from a sufficiently long process) and the
system function is linear, the magnitude-squared coherence function estimates the
extent to which y(t) may be predicted from x(t) by an optimum linear least squares
function.54 The transfer functions and operations described for the mathematical
model of the system itself and for the ocellar sensor transfer characteristics are linear.
Thus, we expect the system to be linear. The magnitude-squared coherence is added
to the frequency response plots as a performance parameter showing linearity.

For an ideally linear system,

y(t) = h(t) ∗ x(t)  ⟷  Y(f) = H(f) X(f)    (41)

G_xy(f) = H(f) G_xx(f)    (42)

G_yy(f) = |H(f)|² G_xx(f)    (43)

C_xy(f) = |G_xy(f)|² / [G_xx(f) G_yy(f)] = |H(f) G_xx(f)|² / [G_xx(f) |H(f)|² G_xx(f)] = 1 ,    (44)
where h(t) is the impulse response and H(f) is its Fourier transform.
Values of coherence satisfy 0 ≤ C_xy(f) ≤ 1. If there is a perfect linear relationship
between x and y at a given frequency, C_xy(f) = 1. If C_xy(f) is less than one but greater
than zero, it is an indication that either noise is an inherent component of the system
measurement, that the assumed function relating x(t) and y(t) is not linear, or that
y(t) is producing output due to input x(t) as well as other inputs. If the coherence is
equal to zero, it is an indication that x(t) and y(t) are completely unrelated.
In the physical world, a perfect linear relationship is rarely realized. In practice,
coherence values higher than 0.5 are acceptable for testing linear systems. For the
experiments described later, the coherence values dip down at specific frequencies,
around 1–2 Hz, for all measurements. Although the input sine waves include these
frequencies, it is believed that the motor was not successful at reproducing them.
All of the final measurements include coherence values higher than
0.5 to be in a practically acceptable region.
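The magnitude, phase, and coherence curves shown in the following sections can be computed with standard Welch-averaged spectral estimates; the sketch below uses synthetic placeholder signals and an assumed segment length rather than the experiment data:

```python
import numpy as np
from scipy import signal

fs = 1.0 / 0.003                        # common sampling rate from Section 3.5 (~333 Hz)
nper = 4096                             # Welch segment length (an assumption)

# Placeholder signals standing in for the resampled gyro (input) and ocelli (output)
gyro = np.random.randn(120000)
ocelli = signal.lfilter([0.8, 0.1], [1.0, -0.2], gyro) + 0.05 * np.random.randn(120000)

f, Gxy = signal.csd(gyro, ocelli, fs=fs, nperseg=nper)      # cross-spectral density
_, Gxx = signal.welch(gyro, fs=fs, nperseg=nper)            # input auto-spectrum
H = Gxy / Gxx                                               # H1 frequency-response estimate
mag_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))

# Magnitude-squared coherence of Eq. 40
_, Cxy = signal.coherence(gyro, ocelli, fs=fs, nperseg=nper)
```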
3.7 Ground Truth
The first consideration for ground truth was the Vicon motion-detection system.
However, its cameras strobe at frequencies 50–100 Hz and use reflected infrared
light that is strobed from a ring of light-emitting diodes (LEDs) surrounding each
camera. The bandpass filter circuit is able to pick up these frequency components
of the infrared light, resulting in the corruption of the output signal. Alternative
ground-truth options are the velocity readout from the servo motor controller and
the gyroscope sensor. To choose one as ground truth, a chirp signal between 0.1
and 10 Hz was given to the motor as velocity input, and the comparative responses
of motor velocity readout and gyroscope were verified, with respect to the Vicon
system as input. The ideal response should be a flat curve. As seen in Fig. 20, the
gyroscope provides a flatter magnitude response, very close to 0 dB, and less phase delay
than the motor velocity readout. The magnitude-squared coherence plot indicates the
degree of linearity of the input–output relationship at a given frequency. The gyroscope coherence is
higher than motor velocity across all test frequencies; therefore, it is regarded as
ground truth for further analysis. The curves are shown up to 15 Hz to show the
coherence decay outside the test frequencies.
Fig. 20 Motor velocity and gyro frequency response, as seen by the Vicon motion-detection system as input: Frequencies above 10 Hz are shown to illustrate the decrease in coherence outside the controlled motion frequencies. The gyroscope shows a flatter magnitude response and higher coherence than the motor velocity; therefore, it was chosen as the ground truth.
3.8 Understanding the Ocellar Sensor's "Valid Range"
The ocellar circuit functions as an angular-rate sensor only under a specific luminance pattern. Assuming a bright sky and a dark ground, when the photodiodes look to the sides, each of them sees a different horizon: one photodiode sees a brighter patch while the other sees a darker patch. For this algorithm to work, the luminance gradient from the sky to the ground should be constant and negative. We use a light source with a diffuser to create this artificial sky and horizons. The diffuser prevents the direct current (DC) source from acting as a point source by distributing the light intensity along the diffuser surface; this way, the light source acts as the sky rather than the Sun. The photodiodes should see the 2 edges of the source as 2 horizons, so that the brighter/darker patch assumption is satisfied when a rotational motion is applied. The photodiodes should be bent toward the source so that their fields of view intersect. To verify that an intersection is created, the azimuthal position of the motor is varied and the photodiode outputs are checked for an overlapping FOV. Figures 21 and 22 show the unbent and bent raw photodiode outputs, respectively, with respect to the azimuthal position of the motor. A partially overlapping FOV was achieved by bending the photodiodes toward the light source.
Fig. 21 Unbent photodiode output vs. motor shaft azimuthal position: Photodiode outputs
increase as they pass by the light source. FOVs are not overlapping.
Fig. 22 Bent photodiode output vs. motor shaft azimuthal position: Photodiode FOVs are
partially overlapping, which is required for the ocellar sensor to work. In this (incorrect)
configuration, there are angles where simulated roll motions do not produce any change in the
photodiode outputs.
Due to the small size of the light source and the small FOV of the photodiodes (around 90° each), the maximum swing of the motion stimulus must be small enough to ensure that the photodiode outputs change symmetrically with respect to each other. To determine the velocity dynamic range of the circuit, the azimuth was varied in small steps. For each position value, a target velocity was given to the motor and the circuit output was compared to the gyro output. Outside the region dominated by the light source, the circuit outputs are not reliable.
Figure 23 shows a valid region (azimuthal position from –0.2 to 0.2 radians). In this range, the photodiode outputs are symmetric to each other and the ocelli output directionally matches the gyro output. Figure 24 shows an invalid region (azimuthal position from –1.4 to –0.2 radians). In this range, the photodiode outputs are not symmetric to each other, and the ocelli output does not agree with the gyro. Using these data, the maximum displacement for the ocellar circuit is determined to be about 1 radian. All of the following characterizations are done in this valid region.
Fig. 23 Ocelli in valid range: (above) symmetric photodiode raw output; (below) gyro and
ocelli output for motor azimuthal position (–0.2 to 0.2 radians). Ocelli output is in agreement
with gyro in this range.
Fig. 24 Ocelli in invalid range: (above) asymmetric photodiode raw output; (below) gyro and ocelli output for motor azimuthal position (–1.2 to –0.2 radians). Ocelli output is not in agreement with gyro in this range.
3.9 Ocellar Sensor Frequency Characterization
Frequency characterization is done in 2 ways. First, to confirm the proper operation of the bandpass circuit, the raw photodiode output is used as the input and the filtered signal as the output. Although the bandpass circuits can operate at higher frequencies, the motor stimulus is limited to 10 Hz; thus, only 0.1–10 Hz data are obtained for the circuit's motion characterization. To demonstrate that the photodiode and bandpass filter combination can operate at higher frequencies, an LED was driven by a signal generator between 3 and 100 Hz.
A second frequency characterization compares the optic flow and ocelli outputs with the gyro as ground truth. This provides information about how well the sensors operate within the motion frequencies, in comparison to each other with the same inputs.
3.9.1 Circuit Frequency Characterization
A chirp signal is given as a motor velocity command. Figure 25 shows the simulated
circuit output, and Fig. 26 shows the right and left circuit frequency responses.
Between 0.1 and 10 Hz, the circuit simulation shows a 20-dB/decade increase in
magnitude, from –38 dB to 1 dB. The phase delay starts at –90° and reaches –125° at 10 Hz. The data from both photodiodes show a similar response: the right-circuit magnitude starts at –38 dB and reaches –2 dB, and the left-circuit magnitude starts at –38 dB and reaches 0 dB. The phase response reaches –140° and –124° for the right and left circuits, respectively, after starting from –90°.
Fig. 25 Band-pass filter simulated AC transfer characteristic at 0.1–10 Hz: Magnitude increases at 20 dB/decade. Phase drops from –90° to –125° by 10 Hz.
Fig. 26 Right and left band-pass filter measured AC transfer characteristics at 0.1–10 Hz: Magnitude and phase plots are in agreement with the simulation (Fig. 25).
To demonstrate that the photodiode and bandpass filter combination can operate at higher frequencies, an LED was taped to one photodiode (see Fig. 27). The electrical signal (a voltage-controlled sine wave) from the signal generator was swept from 3 to 100 Hz. Figure 28 shows the simulated AC characteristics from 3 to 100 Hz. The gain starts at –8.59 dB, reaches 5.6 dB at 50 Hz, and then decreases to 4.74 dB at 100 Hz. The phase starts at –101° at 3 Hz and decreases to –205° at 100 Hz. Figure 29 shows the measured frequency response of the circuit from the photodiode input (LED) to the filtered output. The magnitude response starts at –10.25 dB, reaches –0.29 dB at 50 Hz, and then decays to –1.281 dB at 98.6 Hz. The phase response starts at –105.8° at 3 Hz and decreases to –180.5° at 98.76 Hz. Qualitatively, the circuit simulation is close to the measured data: the simulated phase changes by about 103° over this range and the measured phase by about 75°, while the simulated gain increases by 14 dB up to 50 Hz and the measured gain by 10 dB.
Fig. 27 LED sweeping: LED was taped to photodiode and power-supply signal is swept
between 3 and 150 Hz.
Fig. 28 Right and left band-pass filter simulated transfer characteristics between 1 and 100 Hz; the simulation is shown for comparison with the LED sweeping results in Fig. 29
Fig. 29 Right band-pass filter measured transfer characteristics in response to an LED chirp between 3 and 100 Hz: magnitude increases at 20 dB/decade and phase drops from –105° to –180° (in agreement with the simulation in Fig. 28).
3.9.2 Sensor vs. Ground Truth Frequency Characterization
To characterize the ocellar sensor frequency response with respect to the gyroscope (acting as the ground-truth velocity), concatenated sine waves were given as the velocity input. The operation was performed in the valid angle range described previously. The ocelli output is a voltage; its magnitude is scaled to match the gyroscope. This frequency response is from the gyroscope as input to the full ocellar sensor as output. Within the test frequencies, the ocellar sensor shows a relatively stable magnitude around 0 dB, while the phase response degrades over the frequency range. Figure 30 shows the ocellar sensor frequency response. From 0.1 to 1 Hz, there is almost no phase delay between the ocelli and the gyro; after 1 Hz, the ocelli shows a phase delay of approximately 15°. The data above 10 Hz were not taken into account, since the input stimulus cannot exceed 10 Hz; the components at higher frequencies are due to mechanical noise inherent in the motor. Coherence dips occur at the same frequencies as in the other experiments; therefore, it is believed that the motor was not able to implement those frequency components. The ocelli always shows a coherence above 0.5, which is practically acceptable.
Fig. 30 Ocelli frequency response with respect to gyro as input: Frequencies above 10 Hz are shown to illustrate the decrease in coherence outside the controlled motion frequencies. The ocellar magnitude is relatively flat, showing around a 1-dB difference from beginning to end. Phase delay reaches –15° at 10 Hz.
Figure 31 shows the time-domain signals comparing the gyro, ocelli, and optic flow in 0.5-, 1-, 5-, and 10-Hz windows, respectively. The outputs of optic flow and ocelli are scaled to match the gyro output (rad/s) at each window. The ocelli and optic flow agree with the gyro signal at each frequency window.
Fig. 31 Time signals of gyro, ocelli, and optic flow in 0.5-, 1-, 5-, and 10-Hz windows; all
sensor outputs are scaled to match gyro (rad/s) at each window
Figure 32 shows the optic flow frequency characterization for 60-fps data, using Lucas–Kanade with a 25-pixel window size and 16 feature points (4 × 4). The magnitude response is nearly flat over the entire range. There is almost no phase delay up to 1 Hz; beyond that, the delay grows, reaching –35° at 10 Hz. Because these optic-flow data are obtained at the highest frame rate, the magnitude response is expected to degrade as the frame rate decreases. Although the optic flow frequency response remains approximately flat with these settings, the response depends on the motion algorithm, frame rate, and window size, as shown next. Varying these parameters yields a worse high-frequency response, which is the realistic case for flying-vehicle onboard computers. All in all, these data show the maximum bandwidth the optic flow can achieve within the test limitations.
Fig. 32 Frequency response of optic flow with respect to gyro as input: Overall magnitude decrease is 1.42 dB. Phase delay reaches –35° at 10 Hz.
3.10 Ocellar Sensor–Gyro Voltage–Velocity Mapping
To determine the expected ocelli output (in volts) for a given gyro value (rad/s), the ocelli and gyro data across the 1–10 Hz test frequencies were combined. With 0.005-rad/s intervals, expected ocelli output (V) and gyro (rad/s) values were calculated (16 data points for each). These points were fitted to a line of the form
$$f(x)=p_{1}x+p_{2} \tag{44}$$
where x is the gyro measurement (rad/s), f(x) is the expected ocelli output (V), $p_1$ = 0.164, and $p_2$ = 2.489, with $R^2$ = 0.9902. Figure 33 shows the mapping plot. The ocelli output increases monotonically with increasing gyro amplitude, implying linearity. The standard deviation (error bars) is large because of the discretization limit of the ADC: the ADC has 4.8-mV resolution, and the overall motion spans only about 140 mV.
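For reference, such a voltage–velocity mapping can be fitted with an ordinary least-squares line. The sketch below (Python/NumPy, which this report does not prescribe) uses placeholder bin values and synthetic scatter rather than the measured data; only the coefficients of Eq. 44 are taken from the text.

```python
import numpy as np

# Hypothetical binned data: mean ocelli output (V) per 0.005-rad/s gyro bin
gyro_bins = 0.005 * np.arange(16)               # rad/s (placeholder values)
ocelli_mean = 0.164 * gyro_bins + 2.489         # V, consistent with Eq. 44
ocelli_mean += 0.0048 * np.random.randn(16)     # ADC-level scatter (4.8 mV LSB)

# Least-squares fit of f(x) = p1*x + p2 and its coefficient of determination
p1, p2 = np.polyfit(gyro_bins, ocelli_mean, deg=1)
fit = p1 * gyro_bins + p2
ss_res = np.sum((ocelli_mean - fit) ** 2)
ss_tot = np.sum((ocelli_mean - ocelli_mean.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(p1, p2, r_squared)   # near 0.164 and 2.489 for this synthetic data
```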
Fig. 33 Ocelli–gyro mapping plot shows the expected ocelli output (V) for a given gyro measurement (rad/s). Ocelli output is monotonically increasing with increasing gyro values.
3.11 Performance-Related Parameters
3.11.1 Frame Rate
Frame rate is the frequency at which the camera captures consecutive images. As seen in Eq. 2, the Lucas–Kanade algorithm relies extensively on spatial and temporal derivatives computed by numerical differentiation. To give an example, let f be a function that is known only at a number of isolated points. The problem of numerical differentiation is to compute an approximation of the derivative f′ of f from suitable combinations of the known values of f.
Assuming that f is differentiable, the derivative f′(a) at some real number a is defined as
$$f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h} \tag{45}$$
For very small h, this derivative can be approximated by
$$f'(a)\approx\frac{f(a+h)-f(a)}{h} \tag{46}$$
This approximation involves error, and the error increases as h increases. To demonstrate this, one can take f(x) = sin x, for which f′(x) = cos x, and compute the error between cos x and the forward-difference approximation (f(x + h) − f(x))/h; the error grows as h grows.
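A quick numerical illustration of this point (a sketch, not taken from the report's processing):

```python
import numpy as np

def forward_diff(f, x, h):
    """Forward-difference approximation of f'(x), as in Eq. 46."""
    return (f(x + h) - f(x)) / h

x = 1.0
for h in (0.5, 0.1, 0.01, 0.001):
    approx = forward_diff(np.sin, x, h)
    error = abs(approx - np.cos(x))     # exact derivative is cos(x)
    print(f"h = {h:6.3f}  error = {error:.6f}")
# The error shrinks as h shrinks; a lower frame rate corresponds to a larger
# effective h in the temporal derivative, hence a larger error.
```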
Figure 34 shows the frequency response of the 60-, 30-, and 20-fps optic flow results, with the gyro as input and optic flow as output. As the frame rate decreases, the optic flow magnitude response rolls off more steeply, reaching –2.64, –4.92, and –9.62 dB at 10 Hz for 60, 30, and 20 fps, respectively. The phase delay is the same for all frame rates: from 0.1 to 1.1 Hz there is almost no phase delay between optic flow and gyroscope, and from 1.1 to 10 Hz the phase delay increases, reaching –35°. The coherence of the 20-fps measurement is the worst; increasing the frame rate improves the coherence as well.
Fig. 34 Optic flow frequency response with different frame rates, as seen by input gyro: As
the frame rate decreases, roll-off at higher frequencies is steeper. Higher frame rate results in
better coherence. Phase delay does not change due to frame rate.
Besides numerical differentiation error, another explanation lies in the Taylor series approximation used to derive the motion constraint equation. Keeping only the first-order terms assumes that the motion between frames is small; when the motion is larger, the second-order terms come into play and the motion constraint equation no longer holds. When the motion is too fast for a
given frame rate, the spatial/temporal estimate assumption breaks down. In
practice, this resulted in an optical flow measurement of erroneously low
magnitude.
Aliasing is another way to look at this roll-off. When the frame rate decreases, there are fewer optic flow samples of the given sine wave. These samples may fall at arbitrary points of the sine wave and miss the peak amplitudes. At a higher frame rate, more optic flow points reconstruct the sine wave more accurately and capture the peaks.
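A small sketch with assumed numbers illustrates the effect: sampling a 10-Hz sinusoid at 60 and at 20 samples per second and comparing the largest sample magnitude seen in each case (synthetic values, not measured optic flow).

```python
import numpy as np

f_motion = 10.0                # Hz, motion frequency used in the experiments
for fps in (60, 20):
    t = np.arange(0, 1.0, 1.0 / fps)            # one second of frames
    samples = np.sin(2 * np.pi * f_motion * t)  # idealized per-frame estimates
    print(fps, "fps -> largest sampled amplitude:", np.max(np.abs(samples)))
# At 20 fps there are only 2 samples per 10-Hz cycle; with this phase they fall
# near the zero crossings, so the peaks of the sine are missed entirely.
```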
Scheider et al.55 study the optic flow outputs as the angular rate changes. According to their findings for 2 different optic flow algorithms, optic flow matches the real rate for slow motions at a given resolution. As the rate increases, optic flow cannot capture images often enough to obtain an accurate estimate of angular rate: the optic flow estimate first follows a unity line against the real rate, then saturates at a fixed value, and finally rolls off to zero.
To overcome this, one may increase the frame rate or use a smaller (e.g., 2 × 2-binned) image to raise the frame rate. With the current setup, a 752 × 480-pixel image can run at up to 20 fps; when the image is binned by 2, the frame rate increases to 60 fps for a 376 × 240-pixel image. This solution sacrifices maximum image resolution.
3.11.2 Window Size
The Lucas–Kanade algorithm assumes that the motion is the same for all pixels in a window of w × w pixels. This tracking window size determines the number of equations (hence, optic flow vector candidates) used in the least-squares method. Assuming constant motion within the window, more optic flow vectors provide more data points to determine the best fit for optic flow. However, if the window is too large, a point may not move like its neighbors.
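The report does not state which Lucas–Kanade implementation was used; purely as an illustration of the role of the window, the sketch below uses OpenCV's pyramidal Lucas–Kanade with the 25-pixel window and 4 × 4 feature grid mentioned earlier, applied to hypothetical synthetic frames.

```python
import numpy as np
import cv2

# Hypothetical consecutive grayscale frames (376 x 240, as after 2x binning)
prev_gray = np.random.randint(0, 256, (240, 376), dtype=np.uint8)
next_gray = np.roll(prev_gray, shift=2, axis=1)   # simulate a 2-pixel shift

# 4 x 4 grid of feature points (centers of the tracking windows)
xs = np.linspace(40, 336, 4)
ys = np.linspace(30, 210, 4)
pts = np.array([[x, y] for y in ys for x in xs], dtype=np.float32).reshape(-1, 1, 2)

# winSize sets how many pixel equations enter each least-squares solve
next_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, next_gray, pts, None, winSize=(25, 25), maxLevel=2)

flow = (next_pts - pts).reshape(-1, 2)
valid = status.ravel() == 1
if valid.any():
    print("mean x-flow (pixels/frame):", flow[valid, 0].mean())
```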
Figures 35 and 36 show the optic flow frequency response for different window sizes. Changing the window size significantly affects the coherence plot, while the phase delay remains the same for all window sizes. The magnitude is most erroneous for the 10- and 20-pixel (10 × 10, 20 × 20) window sizes but stays relatively constant for the rest. The coherence is worst at 10 pixels; it improves as the window size increases from 20 to 40 pixels and remains the same from 40 through 70 pixels.
Fig. 35 Optic flow frequency response with different window sizes (w = 10, 20, 30, 40), as
seen by input gyro: Very small windows (10 × 10 pixel) result in erroneous magnitude
response. Magnitude response and coherence improve as window size increases, phase delay
remains the same.
Fig. 36 Optic flow frequency response with different window sizes (w = 50, 60, 70), as seen
by input gyro: After 50 × 50-pixel window, magnitude, phase, and coherence plots do not
change.
3.11.3 Feature Points
Feature points are the center pixels around which optic flow vectors are calculated within the tracking window. The number of feature points determines the number of optic flow vectors computed. The plotted optic flow is only the x component of the average optic flow field. The feature points are distributed evenly over the x and y dimensions of the image, with a spacing of
()=()
() (47)
where x feature points are referred to as x-by-x center pixels. Figures 37 and 38
show the image scene with 10 × 10 and 4 × 4 feature points, respectively.
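For illustration, a uniform grid of center pixels consistent with this description can be generated as follows (a sketch; the exact spacing and margin convention used in the report is an assumption).

```python
import numpy as np

def feature_grid(width, height, n):
    """Place an n x n grid of feature points evenly over a width x height image."""
    spacing_x = width / (n + 1)      # assumed convention: equal margins all around
    spacing_y = height / (n + 1)
    xs = spacing_x * np.arange(1, n + 1)
    ys = spacing_y * np.arange(1, n + 1)
    return np.array([(x, y) for y in ys for x in xs], dtype=np.float32)

pts_4x4 = feature_grid(376, 240, 4)      # 16 feature points, as in Fig. 38
pts_10x10 = feature_grid(376, 240, 10)   # 100 feature points, as in Fig. 37
print(pts_4x4.shape, pts_10x10.shape)
```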
Fig. 37 Camera scene (10 × 10 feature points)
Fig. 38 Camera scene (4 × 4 feature points)
Figure 39 shows the optic flow frequency response for different numbers of feature points. The 2 × 2 feature points produce the most erroneous optic flow magnitude; the magnitude response improves after 2 × 2 and stays relatively constant from 4 × 4 to 15 × 15 feature points. As with the window-size result, the phase response does not change with the number of feature points. The coherence is worst with 2 × 2 feature points. Increasing the number of feature points improves the coherence, but the coherence no longer changes beyond 8 × 8 feature points. This indicates that 8 × 8 feature points already accumulate enough optic flow vector data points for the best fit; additional feature points bring redundant data.
Fig. 39 Optic flow frequency response with different number of feature points (f), as seen by
input gyro: 2 × 2 feature points result in erroneous magnitude plot. As the feature points
increase, magnitude and phase plots do not show much change; however, coherence improves.
3.11.4 Luminance Intensity
The DC light input power is varied to determine how the luminance intensity changes the ocellar circuit output, and the circuit outputs at the same motion frequency are compared. Figure 40 shows the peak-to-peak amplitudes with respect to input power for 10-Hz motion. As the power increases, the amplitude increases, as expected. The fitted line f(x) = p1·x + p2 has coefficients p1 = 0.088 and p2 = 0.006 (R² = 0.9994). This creates the need for an "adaptive gain" for different luminance values in the environment.
Fig. 40 Light source input power vs. ocelli peak-to-peak amplitude: Increasing luminance linearly increases the peak-to-peak amplitude. The DC light source is specified in Table 3.
3.11.5 Photodiode Bending
The photodiodes should be bent toward the light source so that they share an intersecting FOV and so that one photodiode's output increases while the other's decreases. The amount of bending determines how much their FOVs intersect and how well they see the edges of the light source as 2 different horizons. The reference for bending is shown in Fig. 41. If β = 0°, no common luminance is shared; if β = 90°, the FOVs completely intersect and no symmetric change with respect to each other is observed. The assumption is satisfied for β values between 0° and 45°; specifically, for β = 30°, 40°, and 45°, the raw outputs were observed to be symmetric to each other. For β ≈ 90°, the same sine-wave shape is seen at the same instant on both photodiodes.
Fig. 41 Bending illustration: The photodiodes should share an intersecting FOV toward the light source for the sensor to operate. Bending values 30° < β < 45° were observed to give symmetric photodiode outputs. β = 90° completely overlaps the FOVs, without distinct horizons for each photodiode.
3.12 Test Setup Limitations
The maximum motion frequency achieved with the motor is around 10 Hz in velocity mode; in position mode the limit is even lower, about 2 Hz. Above these frequencies, the motor does not follow the commanded position/velocity. The higher-frequency components in the plots come from inherent mechanical vibrations of the motor and from the flickering of the laboratory lights at 60 Hz and its harmonics at 120–180 Hz. The camera used has a theoretical frame rate of 87 fps; however, at this rate, frame drops are observed, and the optic flow calculation is highly corrupted by frame drops. Frame drops were minimized at a 60-fps frame rate.
Our first attempt used 2 cameras to cover a larger field of view. This configuration requires triggering to ensure that the cameras capture images at the same time. Triggering was achieved with 2 PX-4 inertial measurement units working as master and slave. However, the data-transfer limitation of the USB port reintroduced the low-frame-rate problem; the frame drops stopped only after switching back to one camera at the same frame rate. To allow for triggering, the frame rate had to be decreased to 20 fps, which limited the optic flow bandwidth. To show the maximum achievable optic flow bandwidth with this configuration, only one camera was used, at 60 fps with a 376 × 240-pixel image.
4. Sensor Fusion
As described in Section 3, the ocellar sensor shows a relatively stable magnitude across the test frequencies. The optic flow frequency response can keep up with the ocellar sensor for 60-fps data; as the frame rate decreases, the optic flow magnitude plot rolls off. At high motion frequencies (where the optic flow information degrades), it is possible to use the ocellar circuit. This section first presents the biological background for sensor fusion in insect compound eyes and ocelli, then discusses general fusion approaches from the literature, and finally fuses the optic flow and ocellar sensor data to demonstrate compensation of the optic flow's high-frequency roll-off using the ocellar sensor.
4.1 Biological Background for Sensor Fusion
Behavioral studies suggest that the ocelli and compound eyes work together to stabilize flight.21,22,24 In the blowfly, lobula plate tangential cells have been shown to estimate self-motion from local motion information provided by the compound eyes. One tangential cell reported to respond to optic flow information is V1.56 Parsons et al.57 report that V1 responds to ocellar stimuli as well; the response increases with the rate at which the light intensity changes, implying that V1 might encode angular velocity information in addition to optic flow information. Haag et al.58 show that a prominent descending neuron, DNOVS1, receives input from 2 sources: from the photoreceptors of the compound eye via large-field motion-sensitive cells and from the photoreceptors of the ocelli via ocellar interneurons. Parsons et al.59 report that lobula plate neurons combine inputs from both ocelli and compound eyes. Ocellar responses encode information about 3 axes, whereas compound-eye responses encode 9. This indicates that the ocelli can detect rotation about only 3 axes, offering less specificity than the compound eye. If we assume a direct summation of ocellar and compound-eye neuronal signals, this might help flight behavior about those 3 axes (since there is more information for them, from both compound eyes and ocelli); however, for the other 6 axes, the ocelli might output zero, and the fused response might degrade flight behavior, which seems counterintuitive. Parsons et al.59 suggest that each VS neuron is tuned to the ocellar axis closest to its compound-eye axis, combining the speed of the ocelli with the accuracy of the compound eyes without compromising either.
Given that ocelli are faster than compound eyes, what is the quantitative difference between these latencies? The response latency depends strongly on experimental parameters, such as the contrast and frequency of a moving stimulus; for example, latency decreases with increasing contrast and stimulus frequency.
Moreover, temperature changes and the age of the fly affect the latency.60 Safran et al.61 report that the motion-sensitive neuron H1 (a compound-eye neuron) transmits signals in 20–30 ms, whereas Parsons et al.59 measure 6 ms for the ocellar latency, a significant reduction compared with the compound eyes. For high-frequency disturbances, low-latency ocellar neurons are needed.
4.2 Fusion Approaches
In motion detection and control systems, especially in flight control and inertial navigation, different kinds of sensors are used on one platform. When measuring a particular variable, a single type of sensor may not meet all the required performance specifications. For example, both accelerometer and gyroscope data can be used to estimate attitude angles: the accelerometer provides an angle estimate from the measured gravity direction, while the gyroscope provides an angular rate that is integrated once to obtain an angle. The accelerometer is good for the "long term", meaning that it does not drift; the gyroscope is good for the "short term", since it responds quickly but has poor drift characteristics. An ideal combination achieves a fast transient response with no drift by combining the good qualities of the 2 measurements.
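A discrete-time, first-order complementary filter for this accelerometer/gyroscope example might look as follows (a generic textbook sketch with an assumed blend factor, not the fusion used later in this report):

```python
def complementary_angle(gyro_rate, accel_angle, dt, angle_prev, alpha=0.98):
    """Blend the integrated gyro rate (fast but drifting) with the accelerometer
    angle (slow but drift-free): the gyro path is effectively high-passed and
    the accelerometer path low-passed."""
    gyro_angle = angle_prev + gyro_rate * dt   # short-term estimate, drifts
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# Example usage with made-up samples (rad/s, rad, s)
angle = 0.0
for gyro_rate, accel_angle in [(0.10, 0.001), (0.12, 0.004), (0.09, 0.006)]:
    angle = complementary_angle(gyro_rate, accel_angle, dt=0.01, angle_prev=angle)
print(angle)
```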
Theoretically, if a time-varying signal is applied to both a low-pass and a high-pass filter with unity gain, the sum of the filtered signals should be identical to the input signal (see Fig. 42). Assume that x and y are noisy measurements of some signal z, with x corrupted by low-frequency noise and y by high-frequency noise; z′ is the estimate of z produced by the complementary filter.
Fig. 42 Illustration of complementary filter
Practically, complementary and Kalman filters provide the fusion of 2 signals. The Kalman filter, which works in the time domain, needs a statistical description of the noise corrupting the signals; this noise is assumed to be Gaussian white noise.
Complementary filters approach the problem from the frequency domain and are generally used for fusion problems that do not require an explicit noise model. For digital implementation, the complementary filter has a considerable advantage over the Kalman filter because no Kalman gain computation is required at each update. Therefore, once the filter coefficients are determined, the update rate of a complementary filter can be higher than that of a Kalman filter for each loop. This is an important consideration for applications in which high-rate loop closure is necessary.
4.3 Previous Sensor Fusion Implementations
In the literature, sensor fusion is generally performed with complementary or Kalman filtering, often for virtual reality applications in computer vision and for attitude control. Vision-based information helps avoid the errors that result from integrating inertial sensors over time. Vaganay et al.62 fuse 2 accelerometers and 3 gyroscopes with an extended Kalman filter to obtain attitude information for an indoor mobile robot. Foxlin63 fuses gyroscopes and an inclinometer for head tracking using a Kalman filter. You and Neumann64 integrate a high-frequency-stable gyroscope and low-frequency-stable vision-based tracking using a Kalman filter for augmented reality. Wu et al.65 use an extended Kalman filter that takes information from camera images, an inertial measurement unit, and magnetometers to estimate the pose of the vehicle. Cheviron et al.66 fuse accelerometer, gyroscope, and vision sensors to obtain position, velocity, and attitude information for an unmanned aerial vehicle (UAV) using a nonlinear complementary filter framework. Bleser and Stricker67 use an extended Kalman filter to fuse vision-based output for slow movements and inertial sensor output for fast movements in virtual reality applications. Conte and Doherty68 use a Kalman filter to fuse data from 3 accelerometers and 3 gyroscopes with a position sensor for UAV navigation; the position input comes either from a global positioning system (GPS), when GPS is available, or from a vision system (feature tracking) when it is not. Schall et al.69 fuse GPS, inertial sensor, and camera image data for global pose estimation in augmented reality, with the inertial and GPS data fused by a Kalman filter. Achtelik and Weiss70 use an extended Kalman filter to fuse an air pressure sensor and a (computationally expensive) vision framework with inertial sensor data to handle the fast movements and disturbances of a micro air vehicle. Campolo et al.71 propose a complementary filter to fuse magnetometer, accelerometer, and gyroscope data for attitude estimation. In this section, the time-domain signals of the ocellar and optic flow outputs are combined to extend the optic flow frequency response.
4.4 Ocellar Sensor-Optic Flow Fusion Approach
Previous results show that optic flow rolls off at high motion frequencies, whereas the ocellar sensor shows a relatively flat response at high frequencies. This experiment uses a camera theoretically capable of 87 fps and a high-speed Ubuntu laptop. Even with this configuration, the real frame rate obtained from the camera is 60 fps because of the data-transfer limitations of the USB bus; frame rates higher than 60 fps result in dropped frames and corrupt the optic flow output.
Commercially available single-board computers (e.g., Raspberry Pi72) allow only lower frame rates. In practice, a Raspberry Pi 2 is limited to 15–20 fps for the same Lucas–Kanade algorithm used in this experiment. A more expensive model, the Odroid XU4,73 is capable of 60 fps; however, it costs twice as much as the Raspberry Pi ($75 vs. $35). A relatively cheap single-board computer will therefore have a limited optic flow computation bandwidth. The ocellar sensor, on the other hand, offers a fast, cheap, and low-power alternative to optic flow computation. It has a relatively flat magnitude and phase response, making it an attractive alternative for sensing rotational motion; however, its performance is highly dependent on the luminance, since it assumes a constant luminance gradient from sky to ground. Optic flow computation carries no such assumption and only needs texture in the scene. Moreover, this setup uses a 180°-FOV lens to obtain wide-field motion. To obtain a larger FOV, the number of cameras may be increased; this, however, creates the need to trigger the 2 cameras simultaneously. When this setup is used with 2 cameras, the triggering reduces the camera frame rate to 20 fps, and the reduction is expected to be larger with a cheaper configuration. A lower frame rate lowers the optic flow bandwidth, making optic flow sensing incapable of performing at high frequencies. To compensate for the high-frequency roll-off of optic flow, the ocellar sensor data are fused with the optic flow. Figure 43 shows the complementary fusion: the optic-flow data are low-pass filtered with a fourth-order Butterworth filter, and the corresponding fourth-order Butterworth high-pass filter with the same cutoff frequency is applied to the ocelli data. A fourth-order Butterworth filter was used instead of single-pole high- and low-pass filters because it resulted in better coherence; single-pole filter combinations decreased the coherence values at high frequencies. The fusion operation increased the bandwidth and decreased the phase delay of the optic flow.
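The following is a minimal sketch of such a complementary fusion, assuming SciPy; the sample rate, cutoff frequency, synthetic signals, and zero-phase (filtfilt-style) filtering are illustrative choices, not details taken from the processing used for Fig. 43.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def complementary_fusion(optic_flow, ocelli, fs, fc=1.0, order=4):
    """Low-pass the optic flow and high-pass the ocelli output with matched
    fourth-order Butterworth filters at the same cutoff, then sum the paths.
    The cutoff fc is a placeholder value."""
    sos_lp = butter(order, fc, btype='low', fs=fs, output='sos')
    sos_hp = butter(order, fc, btype='high', fs=fs, output='sos')
    of_low = sosfiltfilt(sos_lp, optic_flow)   # keep optic flow below fc
    oc_high = sosfiltfilt(sos_hp, ocelli)      # keep ocelli above fc
    return of_low + oc_high

# Example with synthetic, pre-scaled signals (rad/s)
fs = 60.0
t = np.arange(0, 30, 1 / fs)
truth = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 8.0 * t)
optic_flow = np.convolve(truth, np.ones(9) / 9, mode='same')   # rolled-off OF
ocelli = truth + 0.05 * np.random.randn(t.size)                # flat but noisy
fused = complementary_fusion(optic_flow, ocelli, fs)
```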
Fig. 43 Frequency response of ocelli, optic flow, and their complementary fusion: A fourth-order Butterworth filter was used to high-pass the ocelli and low-pass the optic flow. The normalized cutoff frequency had to be kept very small to make use of the ocelli's relatively flat magnitude and less-delayed phase. The fused response shows better coherence than the optic flow's.
An even more direct approach is to take a weighted average of the optic flow and ocelli outputs. While this approach does not provide fully low-pass-filtered optic flow and high-pass-filtered ocelli, if the ocelli weight is kept high, the result is very similar to the ocelli output. Figure 44 shows another fusion that implements
$$a\,(\text{ocelli}) + (1-a)\,(\text{optic flow}) \tag{48}$$
where a = 0.9. The magnitude, phase, and coherence plots fall between the ocelli and the optic flow, very close to the ocelli.
Fig. 44 Frequency response of ocelli, optic flow, and their weighted-average fusion: The ocelli and optic flow time-domain signals are combined to obtain a result close to the ocelli.
Both fusion approaches, however, give a result close to the ocelli, and they assume that the ambient luminance distribution is as calibrated in this experiment. We know that the ocelli magnitude increases with increasing luminance; the peak-to-peak amplitude is a linear function of the input light-source power, as seen in Section 3. Additionally, the ocellar sensor has to remain within its "valid range". Optic flow, in contrast, is immune to luminance intensity and gives a flat magnitude response at low frequencies, whereas the ocelli is vulnerable to luminance intensity but does not show the magnitude roll-off that optic flow shows. It would be ideal to combine the good properties of both measurements in real time. The ocelli magnitude plot with respect to increasing luminance intensity should look like Fig. 45, while the optic flow magnitude plot shows low-pass characteristics.
Fig. 45 Magnitude response of ocelli for different luminance values and of optic flow at 30 fps: Increasing luminance implies a higher magnitude for the ocelli (L1 < L2 < L3 < L4 < L5). Changing ambient luminance creates the need for an adaptive gain. The upper plot shows magnitude-scaled versions of the ocelli response, not curves derived from real luminance values.
A mechanism that allows switching from one mode to another is desired to decide which sensor to use; this switching mechanism may be based on a gyroscope. The gain adjustment may be done with feedback from the ocelli output, which is continuously compared with the gyroscope/optic flow output. If a valid region for the ocelli is found, the ocelli is preferred over optic flow because of its higher speed. A hypothetical iterative approach is shown in Fig. 46. First, the ocelli gain is adjusted using a lookup table and the error between the ocelli and the gyro is computed; if this error is below a threshold, the gain adjustment is complete. Next, the validity of the ocelli output is confirmed by computing the error between the gyro and the optic flow. If these comparisons allow, the ocelli is used for closed-loop rate stabilization; if not, either the gyro or the optic flow is used. The ocelli gain can be adjusted with digital potentiometers and an operational amplifier, with the potentiometers controlled from a microcontroller. For a gain less than 1, a voltage divider reduces the ocelli output; for a gain greater than 1, a noninverting amplifier increases it. This output is continuously fed back to the microcontroller, which compares the gyroscope and ocelli error, finds a new gain value from the lookup table, and adjusts the potentiometers accordingly. Figure 47 shows a possible circuit configuration with the microcontroller.
Fig. 46 Hypothetical sensor decision approach: Adjust ocelli gain by continuously
computing error between gyro/OF and ocelli; check if ocelli is valid to use by comparing
gyro/OF; use ocelli if comparisons allow.
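The decision logic of Fig. 46 could be prototyped in software along the following lines; this is purely hypothetical, and the threshold, lookup table, and function name are illustrative rather than part of the proposed hardware implementation.

```python
def select_sensor(gyro, ocelli, optic_flow, gain_table, err_threshold=0.05):
    """Hypothetical sketch of the Fig. 46 decision loop (all rates in rad/s)."""
    # Step 1: adjust the ocelli gain against the gyro using a lookup table
    gain = gain_table.get(round(abs(gyro), 2), 1.0)
    ocelli_scaled = gain * ocelli
    gain_error = abs(ocelli_scaled - gyro)

    # Step 2: confirm ocelli validity by also checking gyro vs. optic flow
    of_error = abs(optic_flow - gyro)

    if gain_error < err_threshold and of_error < err_threshold:
        return "ocelli", ocelli_scaled        # fast path for rate stabilization
    return "gyro_or_optic_flow", gyro         # fall back to gyro/optic flow

# Example call with made-up instantaneous readings
mode, rate = select_sensor(gyro=0.42, ocelli=0.40, optic_flow=0.45,
                           gain_table={0.42: 1.05})
print(mode, rate)
```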
Fig. 47 Hypothetical ocelli gain adjustment approach: Gains > 1 are tuned by a noninverting op-amp; gains < 1 are tuned by a voltage divider. The tuned outputs are compared with a lookup table, and the microcontroller iteratively tunes the digital potentiometers until the error falls below the threshold.
5. Conclusion and Future Work
5.1 Conclusion
Frequency-domain characterizations of optic flow and ocellar sensing are presented, and the advantages and disadvantages of both sensing mechanisms are discussed. In summary:
• The ocellar sensor shows a relatively flat magnitude response and less phase delay than optic flow.
• The ocellar sensor is attractive for high-rate loop closure since it is cheaper and faster than high-quality cameras.
• The displacement dynamic range of the ocellar sensor is observed to be 1 radian with this setup, due to the small size of the light source. Using a larger light source, higher displacements may be achieved.
• The frequency dynamic range of the ocellar sensor is observed to be up to 10 Hz with motion and up to the low-frequency cutoff without motion. Ten hertz is a limitation of the mechanical test setup; higher motion frequencies are expected based on the circuit simulation and LED experiment results. For outdoor experiments, the low-frequency cutoff of the band-pass circuit can be eliminated since there is no flickering issue outdoors.
• The ocellar sensor magnitude shows a linear relationship with luminance intensity. Since it is highly luminance-dependent, an adaptive gain calibration is necessary for use at different luminance levels.
• The ocellar sensor output increases monotonically with increasing gyro values.
• The optic flow magnitude rolls off at high frequencies. Specifically, 60 fps can keep up with the ocelli response, while 30 and 20 fps show roll-off at 7 Hz; lower frame rates show steeper roll-off. Phase delay increases with increasing frequency, and all frame rates tested show the same phase delay across all frequencies.
• The optic flow algorithm parameters (feature points, window size) affect the coherence. No significant change in the magnitude and phase plots is observed, except for erroneous magnitudes at extremely small window sizes or numbers of feature points.
5.2 Future Work
Several potential directions may be taken to extend the work of this report. Taking
the characterization results, performance parameters, and hypothetical sensor
fusion suggestions into account, a closed-loop optic flow and ocellar-based fusion
may be implemented to perform real-time stabilization and disturbance rejection.
Multiple ocellar sensors with lenses may be placed in an array-like fashion on a
flying vehicle to extend the current FOV of the ocellar sensor. The outputs of the ocellar sensors may be matched with predefined patterns to indicate where exactly the disturbance occurs.
The combination of optic flow computation and the ocellar sensor provides both slow and fast alternatives for horizon detection and angular-rate sensing.
The coherence in the ocellar sensor and optic flow frequency response plots shows dips at specific frequencies. The reason for these dips could not be identified during the experiments. If the dips were caused by the motor's mechanical noise, both the gyroscope and the ocellar sensor should pick up that noise, resulting in the same motion for both of them. The possibilities of motor resonance and gyroscope resonance were also eliminated after confirming the time-domain signals against the ground truth. It is presumed that the ocellar sensor may be slightly modulating the input sine wave at these frequencies. The experiments can be repeated with another ocellar sensor board and/or another motor.
To compensate for optic flow's slow rate, another direction might be converting the optic flow–ocelli system to a fully analog scheme. Combining both sensors in the analog domain might provide the complementary approach in a compact, fast, and lightweight way. While digital optic flow computation has the freedom of easy adaptation and reconfiguration with different, sophisticated, and robust algorithms, subthreshold analog very-large-scale integration (VLSI) optic flow designs are much smaller, lighter, lower power, and faster. One may argue that single-board computers are already lightweight; however, decreases in size and weight are extremely important factors for micro aerial vehicle design. VLSI allows the photodiodes and the computation circuitry to be fabricated on a single piece of silicon; therefore, it is well suited to vision-based sensing.
Insect ocelli have high responsivity at ultraviolet wavelengths. A completely different direction might be taking the ocellar sensor outside and using the sky–ground discrimination in the ultraviolet. The wideband photodiodes in the current circuit can be replaced with ultraviolet photodiodes. A detail to consider is the difference in ultraviolet intensity between sky and ground at different times of day and in different weather conditions. On a sunny day, the results show that the sky is brighter than the ground in the ultraviolet; on a cloudy day, it can be the opposite. While the constant and negative luminance gradient may not be satisfied in all cases, specific weather conditions (e.g., a sunny day with no clouds) can allow outdoor use. We have built the ultraviolet version of the ocellar sensor and tested it outside. A main problem is the uneven ultraviolet intensity reaching the 2 ultraviolet photodiodes. On cloudy days, there is nearly no ultraviolet difference between the sky and the ground; it is thought that the clouds block the ultraviolet portion of the sunlight. On sunny days, one photodiode should not see the high intensity created by the Sun directly and should receive only the portion coming from the sky. The instantaneous displacement of the clouds and the wind are also factors that create uneven ultraviolet intensity on the 2 photodiodes. While these cases make outdoor testing difficult, some datasets show agreement with the gyroscope output.
6. References
1. Barth FG, Humphrey J, Srinivasan M. Frontiers in sensing: from biology to
engineering. Wien (Austria): Springer Business & Media; 2012 [accessed
2016 May 6]. p. 4. E-book URL:
http://www.springer.com/us/book/9783211997482.
2. Mazokhin GA. Insect vision. New York (NY): Plenum Press; 1969.
3. Horridge GA. Compound eye and vision of insects. New York (NY): Oxford
University Press; 1975.
4. Heisenberg M, Reinhard W. Vision in Drosophila: genetics of
microbehavior. Berlin (Germany): Springer-Verlag; 1984. pp.82–94
5. Parsons M, Krapp HG, Laughlin SB. Sensor fusion in identified visual
interneurons. Current Biology. 2010;20(7): London, UK.
6. Safran MN, Flanagin V, Borst A, Sompolinsky H. Adaptation and
information transmission in fly motion detection. Journal of
Neurophysiology. 2007;98(6): Jerusalem, Israel.
7. Joesch M, Schnell B, Shamprasad VR, Reiff DF, Borst A. On and off
pathways in Drosophila motion vision. Nature. 2010:468(7321): Martinsried,
Germany.
8. Borst A. Fly visual course control: behavior, algorithms and circuits. Nature
Reviews Neuroscience. 2014;15(9): Martinsried, Germany.
9. Wallace GK. Visual scanning in the desert locust Schistocerca gregaria
Forskal. Journal of Experimental Biology. 1959;36(3): Reading, UK.
10. Silies M, Gohl D, Clandinin TR. Motion-detecting circuits in flies: Coming
into view. Annual Review of Neuroscience. 2014;37(307–27): Stanford, CA.
11. Kral K, Michael P. Motion parallax as a source of distance information in
locusts and mantids. Journal of Insect Behavior. 1997;10(1): Graz, Austria.
12. Sobel EC. The locust’s use of motion parallax to measure distance. Journal of
Comparative Physiology A. 1990;167(5): Philadelphia, PA.
13. Schnell B, Joesch M, Forstner F, Raghu SV, Otsuna H, Ito K, Borst A, Reiff
DF. Processing of horizontal optic flow in three visual interneurons of the
Drosophila brain. Journal of Neurophysiology. 2010;103(3): Martinsried,
Germany.
14. Haag J, Denk W, Borst A. Fly motion vision is based on Reichardt detectors
regardless of the signal-to-noise ratio. Proceedings of the National Academy
of Sciences of the United States of America. 2004;101(46): Heidelberg,
Germany.
15. Frye M. Elementary motion detectors. Current Biology. 2015;25(6): Los
Angeles, CA.
16. Ocellus cross section. San Francisco (CA): Wikimedia; 2011 [accessed 2016
May 6].
https://commons.wikimedia.org/wiki/File:Insect_ocellus_diagram.svg
17. Stange G, Stowe S, Chahl JS, Massaro A. Anisotropic imaging in the
dragonfly median ocellus: a matched filter for horizon detection. Journal of
Comparative Physiology. 2002;A188(6): Canberra, Australia.
18. Wilson M. The functional organisation of locust ocelli. Journal of
comparative physiology. 1978;124(4): Canberra, Australia.
19. Goodman LJ. Organisation and physiology of the insect dorsal ocellar
system. Handbook of Sensory Physiology. New York (NY): Springer; 1981.
20. Kastberger G, Schumann K. Ocellar occlusion effect on the flight behavior of
homing honeybees. Journal of Insect Physiology. 1993;39(7): Graz, Austria.
21. Schricker B. Die Orientierung der Honigbiene in der Dämmerung. Zeitschrift
für vergleichende Physiologie. 1965;49(5): Munich, Germany. Summary in
English [accessed 2016 May 6]:
http://link.springer.com/article/10.1007/BF00298112#Abs2
22. Stange G, Howard J. An ocellar dorsal light response in a dragonfly. Journal
of Experimental Biology. 1979;83: Canberra, Australia.
23. Stange G. The ocellar component of flight equilibrium control in dragonflies.
Journal of Comparative Physiology.1981;141(3): Canberra, Australia.
24. Taylor CP. Contribution of compound eyes and ocelli to steering of locusts in
flight: I. Behavioural analysis. Journal of Experimental Biology. 1981;93(1):
Berkeley, CA.
25. Schuppe H, Hengstenberg R. Optical properties of the ocelli of Calliphora
erythrocephala and their role in the dorsal light response. Journal of
Comparative Physiology A. 1993;173(2): Tubingen, Germany.
26. Rence BG, Lisy MT, Garves BR, Quinlan BJ. The role of ocelli in circadian
singing rhythms of crickets. Physiological Entomology. 1988;13(2):
Appleton, WI.
27. Sprint MM, Eaton JL. Flight behavior of normal and anocellate cabbage
loopers. Annals of the Entomological Society of America. 1987;80(4):
Virginia, VA.
28. Eaton JL, Tignor KR, Holtzman GI. Role of moth ocelli in timing flight
initiation at dusk. Physiological Entomology. 1983;8(4): Virginia, VA.
29. Wunderer H, Jacobus JDK. Dorsal ocelli and light-induced diurnal activity
patterns in the arctiid moth Creatonotos transiens. Journal of Insect
Physiology. 1989; 35(2): Regensburg, Germany.
30. Land MF, Nilsson DE. Animal eyes. 2nd ed. New York (NY): Oxford
University Press; 2002.
31. Kirschfeld K. The resolution of lens and compound eyes. In: Neural
principles in Vision. Berlin (Germany): Springer; 1976. p. 354–370.
32. Krapp HG, Hengstenberg R. Estimation of self-motion by optic flow
processing in single visual interneurons. Nature. 1996;384(6608): Tubingen,
Germany.
33. Barrows GL, Chahl JS, Srinivasan MV. Biomimetic visual sensing and flight
control. Proceedings of Bristol UAV Conference, 2002; Bristol, UK.
34. Neumann TR, Bülthoff HH. Insect inspired visual control of translatory
flight. The 6th European Conference on Artificial Life; 2001; Czech
Republic.
35. Neumann TR, Bülthoff HH. Behavior-oriented vision for biomimetic flight
control. Proceedings of the EPSRC/BBSRC International Workshop on
Biologically Inspired Robotics; 2002; Bristol, UK.
36. Wu WC, Schenato L, Wood RJ, Fearing RS. Biomimetic sensor suite for
flight control of a micromechanical flying insect: design and experimental
results. IEEE International Conference on Robotics and Automation; 2003;
Taipei, Taiwan.
37. Schenato L, Wu WC, Sastry S. Attitude control for a micromechanical flying
insect via sensor output feedback. IEEE Transactions on Robotics and
Automation. 2004;20(1): Berkeley, CA.
38. Javaan C, Thakoor S, Le Bouffant N, Stange G, Srinivasan MV, Hine B,
Zornetzer S. Bioinspired engineering of exploration systems: a horizon
sensor/attitude reference system based on the dragonfly ocelli for mars
exploration applications. Journal of Robotic Systems. 2003;20(1): Canberra,
Australia.
39. Kerhuel L, Viollet S, Franceschini N. Steering by gazing: An efficient
biomimetic control strategy for visually guided micro aerial vehicles. IEEE
Transactions on Robotics. 2010;26(2): Paris, France.
40. Moore RJD, Thurrowgood S, Bland D. A fast and adaptive method for
estimating UAV attitude from the visual horizon. IEEE/RSJ International
Conference on Intelligent Robots and Systems; 2011; San Francisco, CA.
41. Javaan C, Akiko M. Biomimetic attitude and orientation sensors. IEEE Sensors
Journal. 2012;12(2): Edinburgh, Australia.
42. Gremillion G, Humbert JS, Krapp HG. Bio-inspired modeling and
implementation of the ocelli visual system of flying insects. Biological
Cybernetics. 2014;108(6): College Park, MD.
43. Menzel R. Spectral sensitivity and color vision in invertebrates. In:
Comparative physiology and evolution of vision in invertebrates. Berlin
(Germany): Springer Berlin Heidelberg; 1979. p. 503–580.
44. Fahle M, Poggio T. Visual hyperacuity: spatiotemporal interpolation in
human vision. Proceedings of the Royal Society of London B: Biological
Sciences. 1981;213(1193): Tubingen, Germany.
45. Adelson EH, Bergen JR. Spatiotemporal energy models for the perception of
motion. The Journal of the Optical Society of America. 1985;2(2): Princeton,
NJ.
46. Herault J. Vision: Images, Signals and Neural Networks, Progress in Neural
Processing. Singapore: World Scientific, 2010. p.126.
47. Barron JL, Thacker NA. Tutorial: Computing 2D and 3D optical flow.
Manchester (UK): University of Manchester; 2005 [accessed 2016 May 6].
http://tina.wiau.man.ac.uk/docs/memos/2004-012.pdf.
48. Lucas BD, Kanade T. An iterative image registration technique with an
application to stereo vision. Proceedings on the 7th International Conference
on Artificial Intelligence; 1981; Vancouver, Canada. p. 674–679.
49. Horn BK, Schunck BG. Determining optical flow. Technical Symposium
East. International Society for Optics and Photonics, 1981.
50. Stanford mobile computer vision lecture. Stanford (CA): Stanford University;
2015 Apr 20 [accessed 2016 May 6].
http://web.stanford.edu/class/cs231m/lectures/lecture-7-optical-flow.pdf.
51. Bouguet JY. Pyramidal implementation of the affine Lucas-Kanade feature
tracker description of the algorithm. Santa Clara (CA): Intel Corporation;
2001 [accessed 2016 May 6].
http://robots.stanford.edu/cs223b04/algo_affine_tracking.pdf.
52. Farnebäck G. Two-frame motion estimation based on polynomial expansion.
Scandinavian Conference on Image Analysis; 2003; Halmstad, Sweden; p.
363–370.
53. Smartmotor developer’s guide. Mountainview (CA): Moog Animatics,
[accessed 2016 May 6]. http://www.animatics.com/support/download-
center.html.
54. Hori M, Shibuya K, Sato M, Saito Y. Lethal effects of short-wavelength
visible light on insects. Scientific Reports. 2014;4(7383): Sendai, Japan.
55. Scheider K, Conroy J, Nothwag W. Computing optic flow with ArduEye
vision sensor. Adelphi (MD): Army Research Laboratory (US); 2013 Jan.
Report No.: ARL-TR-6292.
56. Krapp HG, Hengstenberg R., Egelhaaf, M. Binocular contributions to optic
flow processing in the fly visual system. Journal of Neurophysiology.
2001;85(2): Beilefeld, Germany.
57. Parsons M, Krapp HG, Laughlin SB. A motion-sensitive neuron responds to
signals from the two visual systems of the blowfly, the compound eyes and
ocelli. Journal of Experimental Biology. 2006;209(22): Cambridge, UK.
58. Haag J, Wertz A, Borst A. Integration of lobula plate output signals by
DNOVS1, an identified premotor descending neuron. The Journal of
Neuroscience. 2007;27.8: Martinsried, Germany.
59. Parsons M, Krapp HG, Laughlin SB. Sensor fusion in identified visual
interneurons. Current Biology. 2010;20(7): London, UK.
60. Warzecha AK, Egelhaaf, M. Response latency of a motion-sensitive neuron
in the fly visual system: dependence on stimulus parameters and
physiological conditions. Vision Research. 2000;40(21): Beilefeld, Germany.
61. Safran MN, Flanagin V, Borst A, Sompolinsky H. Adaptation and
information transmission in fly motion detection. Journal of
Neurophysiology. 2007;98(6): Jerusalem, Israel.
62. Vaganay J, Aldon MJ, Fournier A. Mobile robot attitude estimation by fusion
of inertial data. IEEE International Conference on Robotics and Automation;
1993; Atlanta, GA.
63. Foxlin E. Inertial head-tracker sensor fusion by a complementary separate-
bias Kalman filter. Proceedings of the IEEE Virtual Reality Annual
International Symposium; 1996.
64. You S, Neumann U. Fusion of vision and gyro tracking for robust augmented
reality registration. Proceedings of IEEE Virtual Reality; 2001.
65. Wu AD, Johnson EN, Proctor AA. Vision-aided inertial navigation for flight
control. Journal of Aerospace Computing, Information, and Communication.
2005;2(9): Atlanta, GA.
66. Cheviron T, Hamel T, Mahony R, Baldwin G. Robust nonlinear fusion of
inertial and visual data for position, velocity and attitude estimation of UAV.
IEEE International Conference on Robotics and Automation; 2007.
67. Bleser G, Stricker D. Advanced tracking through efficient image processing
and visual–inertial sensor fusion. IEEE Virtual Reality; 2008; Reno, NV.
68. Conte G, Doherty P. Vision-based unmanned aerial vehicle navigation using
geo-referenced information. EURASIP Journal on Advances in Signal
Processing. 2009;2009(387308): Linköping, Sweden.
69. Schall G, Wagner D, Reitmayr G, Taichmann E, Wieser M, Schmalstieg D,
Hofmann-Wellenhof B. Global pose estimation using multi-sensor fusion for
outdoor augmented reality. 8th IEEE International Symposium on Mixed and
Augmented Reality; 2009.
70. Achtelik M, Weiss S. Onboard IMU and monocular vision based control for
MAVs in unknown in- and outdoor environments. IEEE International
Conference on Robotics and Automation; 2011.
71. Campolo D, Schenato L, Pi L, Deng X, Guglielmelli E. Attitude estimation
of a biologically inspired robotic housefly via multimodal sensor fusion.
Advanced Robotics. 2009;23(2009): Rome, Italy.
72. Raspberry Pi model summaries. San Francisco (CA): Wikipedia [accessed
2016 May 6]. https://en.wikipedia.org/wiki/Raspberry_Pi.
73. Odroid XU4 summary. Gyeonggi (South Korea): Hardkernel [accessed 2016
May 6].
http://www.hardkernel.com/main/products/prdt_info.php?g_code=G1434522
39825.
List of Symbols, Abbreviations, and Acronyms
2-D 2-dimensional
AC alternating current
ADC analog-to-digital converter
DC direct current
EMD elementary motion detector
FOV field of view
fps frames per second
GPS global positioning system
LED light-emitting diode
LSB least significant byte
MCU microcontroller unit
MSB most significant byte
SPI serial peripheral interface
sUAS small unmanned aircraft systems
UAV unmanned aerial vehicle
VLSI very-large-scale integration
1 DEFENSE TECHNICAL
(PDF) INFORMATION CTR
DTIC OCA
2 DIRECTOR
(PDF) US ARMY RESEARCH LAB
RDRL CIO L
IMAL HRA MAIL & RECORDS MGMT
1 GOVT PRINTG OFC
(PDF) A MALHOTRA
1 US ARMY RESEARCH LAB
(PDF) RDRL SER L
JOSEPH K CONROY
1 UNIV OF MD OR GA TECH
(PDF) NIL GUREL
1 UNIV OF MD
(PDF) TIMOTHY HORIUCHI
1 UNIV OF COLORADO
(PDF) SEAN HUMBERT