January 2017 | Volume 11 | Article 2
TECHNOLOGY REPORT
published: 25 January 2017
doi: 10.3389/fnbot.2017.00002
Frontiers in Neurorobotics | www.frontiersin.org
Edited by:
Quan Zou,
UnitedHealth Group, USA
Reviewed by:
Mikael Djurfeldt,
Royal Institute of Technology,
Sweden
Keyan Ghazi-Zahedi,
Max Planck Institute for Mathematics
in the Sciences, Germany
Marcel Stimberg,
Université Pierre et Marie Curie,
France
*Correspondence:
Egidio Falotico
e.falotico@sssup.it
Received: 11 October 2016
Accepted: 04 January 2017
Published: 25 January 2017
Citation:
Falotico E, Vannucci L, Ambrosano A, Albanese U, Ulbrich S, Vasquez Tieck JC, Hinkel G, Kaiser J, Peric I, Denninger O, Cauli N, Kirtay M, Roennau A, Klinker G, Von Arnim A, Guyot L, Peppicelli D, Martínez-Cañada P, Ros E, Maier P, Weber S, Huber M, Plecher D, Röhrbein F, Deser S, Roitberg A, van der Smagt P, Dillmann R, Levi P, Laschi C, Knoll AC and Gewaltig M-O (2017) Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform. Front. Neurorobot. 11:2. doi: 10.3389/fnbot.2017.00002
Connecting Articial Brains to
Robots in a Comprehensive
Simulation Framework: The
Neurorobotics Platform
Egidio Falotico1*, Lorenzo Vannucci1, Alessandro Ambrosano1, Ugo Albanese1,
Stefan Ulbrich2, Juan Camilo Vasquez Tieck2, Georg Hinkel3, Jacques Kaiser2, Igor Peric2,
Oliver Denninger3, Nino Cauli4, Murat Kirtay1, Arne Roennau2, Gudrun Klinker5,
AxelVonArnim6, Luc Guyot7, Daniel Peppicelli7, Pablo Martínez-Cañada8, Eduardo Ros8,
Patrick Maier5, Sandro Weber5, Manuel Huber5, David Plecher5, Florian Röhrbein5,
Stefan Deser5, Alina Roitberg5, Patrick van der Smagt6, Rüdiger Dillmann2, Paul Levi2,
Cecilia Laschi1, Alois C. Knoll5 and Marc-Oliver Gewaltig7
1 The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy, 2 Department of Intelligent Systems and Production
Engineering (ISPE – IDS/TKS), FZI Research Center for Information Technology, Karlsruhe, Germany, 3 Department of
Software Engineering (SE), FZI Research Center for Information Technology, Karlsruhe, Germany, 4 Computer and Robot
Vision Laboratory, Instituto de Sistemas e Robotica, Instituto Superior Tecnico, Lisbon, Portugal, 5 Department of Informatics,
Technical University of Munich, Garching, Germany, 6 Fortiss GmbH, Munich, Germany, 7 Blue Brain Project (BBP), École
polytechnique fédérale de Lausanne (EPFL), Genève, Switzerland, 8 Department of Computer Architecture and Technology,
CITIC, University of Granada, Granada, Spain
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are, at the current stage, too complex to satisfy real-time constraints, they cannot be embedded in a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that allows researchers to easily establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that lets them connect brain models to detailed simulations of robot bodies and environments and use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 “Neurorobotics” of the Human Brain
Project (HBP).1 At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform for robotic tasks as well as in neuroscientific experiments.

1 https://www.humanbrainproject.eu.

Keywords: neurorobotics, robot simulation, brain simulation, software architectures, robot programming, web technologies

1. INTRODUCTION

Developing neuro-inspired computing paradigms that mimic nervous system functions is a well-established field of research that fosters our understanding of the human brain. The brain is a complex structure, and designing models that can mimic such a structure is particularly difficult. Modeling brain function requires understanding how each subsystem (sensory, motor, emotional, etc.) works, how these subsystems interact with each other, and, as a whole, how they can generate complex behaviors in the interaction with the environment. Moreover, it is well known that during development the brain is molded by experience and the environment (Benefiel and Greenough, 1998; Briones et al., 2004). Thus, studying and validating models of brain function requires a proper embodiment of the brain model as well as a dynamic and rich sensory environment in which the robot–brain ensemble can be embedded and then be exposed to a realistic sensory-motor task. Since advanced brain models are too complex to be simulated in real time, the researcher is faced with a dilemma. Either the brain model is simplified until it can be simulated in real time; in this case, the brain model can be embedded in a physical robot operating in the real world, but the complexity of the brain models that can be studied is highly limited. Or the complexity of the brain model is maintained; in this case, there are no limits on the brain models, but it is no longer possible to embed the brain into a real-world task. Rather, the embodiment has to be simulated as well.

While adequate tools exist to simulate either complex neural network models (Gewaltig and Diesmann, 2007) or robots and their environments (Koenig and Howard, 2004), there is so far no tool that allows researchers to easily connect realistic brain models to a robot and embed it in a sensory-rich environment model.

Such a tool would require the capability of orchestrating and synchronizing both simulations as well as managing the exchange of data between them. The goal of such simulations is to study and quantify the behavior of models of the brain. As a consequence, we not only need a complex, realistic experimental environment but also a controllable and measurable setup where stimuli can be generated and responses can be measured. In fact, the complexity and parameters of real environments are intrinsically difficult or even impossible to control. In addition, models of brain functions, designed to properly reproduce brain activity at different levels, cannot be executed in real time due to complex neuron dynamics and the size of the network (Kunkel et al., 2014). This is the reason why we propose to use a digital simulator implementing realistic scenarios. The main restriction we impose is to have a simulator that can run at a “slower” time (limited by the computation time required by the brain simulation) and whose time can be sampled in discrete intervals without compromising the simulation quality.

The idea behind this approach is to provide a tool chain that grants researchers access to simulation control as well as state-of-the-art tools, such as robot and brain models and methods to connect them in a proper way (i.e., connecting spiking neural networks to robotic sensors and actuators). A first approach to connecting spiking neural networks and robots has been presented by Gamez et al. (2012). iSpike is a C++ library that provides an interface between spiking neural network simulators and the iCub humanoid robot. It uses a biologically inspired approach to convert the robot's sensory information into spikes that are passed to the neural network simulator, and it decodes output spikes from the network into motor signals that are sent to control the robot. Another communication interface, named CLONES (Voegtlin, 2011), between a neural simulator [BRIAN (Goodman and Brette, 2008)] and SOFA, a physics engine for biomedical applications (Allard et al., 2007), has been developed using shared memory and semaphores. The system most similar to iSpike and CLONES is the interface that was created for the CRONOS and SIMNOS robots (Gamez et al., 2006), which encoded visual and proprioceptive data from the robots into spikes that were passed to a spiking neural network simulated in SpikeStream. Spiking motor output from the network was transformed back into real values that were used to control the robots. This system was used to develop a spiking neural network that controlled the eye movements of SIMNOS, learnt associations between motor output and visual input, and used models of imagination and emotion to avoid negative stimuli. All these systems provide an interface toward specific robotic platforms able to deal with spiking/digital inputs and convert them appropriately. Besides being restricted to particular robotic platforms, they do not provide a framework for the conversion that allows users to write their own transfer functions. A more generic system that can deal with simulated robotic platforms is AnimatLab (Cofer et al., 2010b). AnimatLab currently has two different neural models that can be used: one is an abstract firing rate neuron model, and the other is a more realistic conductance-based integrate-and-fire spiking neural model. It is also possible to add new neural and biomechanical models as plug-in modules. There are several different joint types and a host
of dierent body types that can be used. Although AnimatLab
does not provide a comprehensive set of neurons and learning
models, some behavior implementation based on this tool is
available such as locust jumping (Cofer etal., 2010a) or dominant
and subordinate craysh (Issa etal., 2012). Despite some of the
mentioned tools represents a good attempt to connect articial
brains to robots, these are not very common in the robotic and
neuroscientic communities likely due to the limitations we have
underlined (robotic platform restrictions, lack of a framework for
conversions). For our framework, we decided to rely on widely
used simulators for the brain models as well as for robots and
environments. is strategic choice should allow to easily attract
users of these platforms. We embedded these simulators in a
comprehensive framework that allows the user to design and
run neurorobotic experiments. In line with our approach, Weidel
et al. (2015, 2016) proposed to couple the widely used neural
simulation tool NEST (Gewaltig and Diesmann, 2007) with the
robot simulator Gazebo (Koenig and Howard, 2004), using the
MUSIC middleware (Djurfeldt etal., 2010).
Here, we describe the first release of the HBP Neurorobotics Platform, which offers scientists and technology developers a set of tools allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems in in silico experiments and technology development. The Neurorobotics Platform (NRP) also provides a comprehensive development framework including editors for creating experiments, environments, and brain and robot models. These tools are accessible via the web, allowing users to work with the platform without tedious installation of software packages. Moreover, through the web, researchers can collaborate and share their models and experiments with their colleagues or with the scientific community.

Although the capabilities to model virtual robots and environments already exist, as confirmed by the mentioned works, and although various labs have created closed-loop setups with simple brain models (Ros et al., 2006; Denoyelle et al., 2014), this platform is the first to allow the coupling of robots and detailed models of the brain. This makes it possible to perform experiments exploring the link between low-level brain circuitry and high-level function.

The aim of this platform is twofold: on one side, the platform can be used to test neuroscientific models of brain areas, or even reconstructions of these areas based on neurophysiological data; on the other side, roboticists can take advantage of such a platform to develop more biologically inspired control architectures. The physical and neural simulations are properly synchronized, and they exchange data through transfer functions that translate sensory information coming from the robot (camera images, encoders, etc.) into input for the brain (currents and spikes) on one side and the network output into motor commands on the other. Additionally, the platform provides a web interface, so that it can be easily accessed and used by a broader user base. From this web interface, the user can also access the editors that are used to construct experiments from scratch and run the experiments without any software installation, benefiting from the computing and storage platforms that have been made available to support the NRP. Therefore, the NRP provides a complete framework for neurorobotics experiment design and simulation. One of the pillars of the NRP development is the reuse and extension of existing software; thus, many components were implemented using suitably chosen existing software.
2. PLATFORM REQUIREMENTS

2.1. Functional Requirements

In order to obtain the functional requirements for the NRP, we first determined which features are needed for the creation of a neurorobotic experiment. In doing so, we followed software engineering concepts and terminologies to itemize platform features as requirements (IEEE, 1998). These features can be divided into two categories, design features and simulation features, each with its own functional requirements.

During the design of a neurorobotic experiment, the user should be able to define all of its properties, and this includes

• the design of a suitable Robot model, by defining both kinematic and dynamic properties as well as the appearance, either from scratch or from preexisting models;
• the possibility to create a rich Environment model in which the robot can operate, by using a library of objects;
• the design of a Brain model, either from scratch or by selecting an existing model, that will be coupled to the robot;
• Brain–Body Integration, in order to specify how the brain model and the robot should be coupled in terms of sensory and motor data exchange to create a Neurobot;
• the capability to change dynamic properties of the Experiment itself, like defining events that can be triggered during the simulation and appropriate response behaviors.

When all properties are defined, the simulation can start. During the execution, the NRP should provide

• World maintenance and synchronization mechanisms, in order not only to simulate both the physics and neural models but also to synchronize the two simulations and exchange data between them, providing a closed-loop control mechanism as defined in the design phase. It must be possible to start, pause, stop, and reset the simulation. The simulation should react to the previously defined trigger events;
• a proper Interactive visualization of the running simulation, comprising a GUI and utilities to see details of the simulation such as brain activity or robot parameters. Moreover, the user should be able to live-edit and interact with the simulation once it is started, using the same design features described above.

A complete list of functional requirements can be found in Appendix A, while an overview of the platform functionalities is shown in Figure 1.
2.2. Non-functional Requirements

Several non-functional requirements were also defined:

• usability and user experience—the platform should be easily accessible to a wide range of users who possibly have no experience in either the neuroscientific or robotic fields. This
FIGURE 1 | Functional overview of the Neurorobotics Platform. Using the design/editing features of the platform, the user is able to create a neurorobotic experiment comprising a brain model integrated with a robotic body (Neurobot) that interacts in a dynamic environment. The experiment is then simulated by a synchronized neural-physics simulation, and the results can be displayed in an interactive fashion.
should be achieved by a user-centric design with intuitive tools and a consistent user experience. Moreover, the platform should also provide an additional user level in order for expert users to have more detailed design capabilities.
• open source—the NRP should rely on existing building blocks, and in particular on open-source ones, as the platform has to be released to a wide audience.
• interoperability—each software component that saves or loads data should use, wherever possible, well-known data formats.
• software quality—in order to ensure software quality, the development of the platform should follow software engineering practices such as keeping a task-tracking system, using version control with code review and continuous builds, and employing standard software development methodologies.
2.3. Integration with Other HBP Platforms

The NRP is one of six platforms developed in the Human Brain Project. In addition to the Neurorobotics Platform, the HBP develops a Neuroinformatics Platform, a Brain Simulation Platform, a High Performance and Data Analytics Platform, a Neuromorphic Computing Platform, and a Medical Informatics Platform. Most of these offer their services through the web and are built on top of a common set of APIs and services, called the HBP Collaboratory Portal. It provides the following services:

• Authentication, access rights, and user profiles. The users are provided with a Single Sign-On mechanism so they can use the same credentials to access every HBP platform.
• Document repository. The users have access to a document repository in which they can store and manage their projects. It supports one of the NRP's requirements, namely, the possibility for users to share their models (brain, connections, environment, robots, or experiments) with team members.
• Collaboratory API. A web-based application with associated libraries allowing every platform's web interface to have the same look and feel, and to be implemented as a plugin within the Collaboratory Portal.

All the HBP platforms should provide some level of integration with each other. For this reason, short-term development plans include the integration of the Neurorobotics Platform with the Brain Simulation Platform and the Neuromorphic Computing Platform, while in the long term, integration with the High Performance Computing and Analytics Platform will also be provided.
The Brain Simulation Platform aims at providing scientists with tools to reconstruct and simulate scaffold models of the brain and brain tissue using data from within and outside the HBP. The Brain Simulation Platform will be integrated with the NRP for simulating brain models at various levels of detail. Moreover, alongside the Brain Simulation Platform, scaffold brain models will be gathered and made available for use in the platform.

The Neuromorphic Computing Platform provides remote access to large-scale neuromorphic computing systems built in custom hardware. Compared to traditional HPC resources, the neuromorphic systems offer higher speed (real-time or accelerated simulation time) and lower energy consumption. Thus, the integration of the platform will provide an alternative neural simulation backend more suitable for simulations with a high computational burden, such as experiments involving plasticity and learning.
3. SOFTWARE ARCHITECTURE

The Neurorobotics Platform is based on a three-layer architecture, shown in Figure 2. The layers, starting from the one furthest from the user, are the following:

1. the software components simulating the neurorobotics experiment;
2. the REST server, or Backend;
3. the Experiment Simulation Viewer (ESV), a graphical user interface, and the Robot Designer, a standalone application for the design of physical models.
FIGURE 2 | Architectural overview of the platform. From left to right, three layers can be distinguished: the user interface (Experiment Simulation Viewer), the
services connecting the user interface to the simulations (implemented in the Backend), and the internal computations, comprising the two simulations and the
synchronization between them.
e rst layer comprises all the soware components that
are needed to simulate a neurorobotics experiment. e Wor ld
Simulation Engine (WSE) is responsible for simulating robots and
their environment. e Brain Simulator is responsible to simulate
the neural network that controls the robot. e Closed Loop
Engine (CLE) implements the unique logic of each experiment
and orchestrates the interaction between the two simulators and
the ESV.
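The orchestration performed by the CLE can be pictured as a plain alternation of world and brain steps, with data exchanged between them after every step. The following is a minimal, self-contained sketch of that idea; every class and function name here is a hypothetical stand-in for illustration, not the actual CLE API:

```python
# Sketch of a closed-loop engine: advance both simulators in lock-step
# over the same simulated interval dt, exchanging data between steps.
# All names are illustrative stand-ins, not the NRP's actual code.

class ToySimulator:
    """A stand-in for either the world or the brain simulator."""
    def __init__(self):
        self.time = 0.0
        self.output = 0.0

    def step(self, dt, input_value):
        self.time += dt
        # Trivial placeholder dynamics: a decayed echo of the input.
        self.output = 0.5 * self.output + input_value
        return self.output

def run_closed_loop(brain, world, dt, steps):
    """Alternate world and brain steps, translating data both ways."""
    sensor_to_spikes = lambda s: s * 10.0   # robot-to-neuron direction
    spikes_to_motor = lambda r: r / 10.0    # neuron-to-robot direction
    motor_cmd = 0.0
    for _ in range(steps):
        sensor = world.step(dt, motor_cmd)
        rate = brain.step(dt, sensor_to_spikes(sensor))
        motor_cmd = spikes_to_motor(rate)
    return world.time, brain.time

brain, world = ToySimulator(), ToySimulator()
t_world, t_brain = run_closed_loop(brain, world, dt=0.02, steps=50)
```

Because both simulators advance by the same simulated interval in every iteration, they stay synchronized in simulated time even when the loop runs much slower than real time, which is exactly the "slower time, sampled in discrete intervals" restriction stated in the introduction.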
The second layer contains the REST server, also referred to as the Backend, which receives requests from the ESV and forwards them to the appropriate components, which implement the requested service, mainly through ROS. The REST server thus acts as a relay between the graphical user interface (the frontend) and the various simulation engines needed for the neurorobotics experiment. For practical reasons, the services provided by the REST server are tightly coupled with the high-level functionality shown in the ESV GUI. Thus, any graphical control interacting with the REST server has a corresponding service. Actions that change the state of the simulations, such as starting, stopping, or pausing a simulation, are implemented as a single parametric service.
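The idea of a single parametric state-change service can be illustrated with a small state machine that validates transitions; the state names and allowed transitions below are a plausible simplification for illustration, not the backend's actual definitions:

```python
# Sketch of a single parametric state-change service: instead of separate
# start/pause/stop endpoints, one entry point receives the target state
# and validates the transition. States and transitions are illustrative.

VALID_TRANSITIONS = {
    "created": {"started"},
    "started": {"paused", "stopped"},
    "paused": {"started", "stopped"},
    "stopped": set(),  # terminal: the simulation must be recreated
}

class SimulationStateService:
    def __init__(self):
        self.state = "created"

    def set_state(self, target):
        """The single parametric service every GUI control calls."""
        if target not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {target}")
        self.state = target
        return self.state

svc = SimulationStateService()
svc.set_state("started")   # the GUI "play" control
svc.set_state("paused")    # the GUI "pause" control
svc.set_state("stopped")   # the GUI "stop" control
```

A single parametric service keeps the REST surface small: adding a new state affects one transition table rather than a new endpoint per action.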
The ESV is the web-based graphical user interface to all neurorobotics experiments. Using the ESV, the user can control and visualize neurorobotics experiments. The ESV also provides a number of editors to configure the experiment protocol as well as the parts of the experiment, such as the environment, the brain model, and the connection between brain and robots (Brain Interface and Body Integrator). The Robot Designer is a tool developed to support the design of robot models that can be included in simulation setups executable on the NRP. This tool is developed as a plugin for the 3D modeling tool Blender 3D.2
3.1. Brain Simulator

The goal of the Brain Simulator is to simulate a brain circuit, implemented as a spiking neural network (SNN).

2 https://www.blender.org/.

Several simulators for SNNs exist, with different levels of detail, ranging from more abstract point-neuron simulations, which consider neural networks as directed graphs, to morphologically accurate ones in which the properties of axons and dendrites are taken into account.

Inside the NRP, the simulator currently supported is NEST (Gewaltig and Diesmann, 2007), a point-neuron simulator with the capability of running on high-performance computing platforms, which is also one of the simulation backends of the Brain Simulation Platform. NEST is supported through the PyNN abstraction layer (Davison et al., 2008), which provides the same interface for different simulators and also for neuromorphic processing units, i.e., dedicated hardware for the simulation of SNNs such as SpiNNaker (Khan et al., 2008), provided by the Neuromorphic Computing Platform. Both NEST and PyNN provide convenient mechanisms to design neural networks. Furthermore, they are among the most used tools in the neuroscientific community. On the other hand, the only APIs they provide are written in Python, which heavily constrains the choice of the language to use for interacting with them.
3.2. World Simulator

In order to have realistic experiments, the accurate brain simulation must be coupled with a detailed physics simulation. The World Simulator component aims at delivering a realistic simulation of both the robot and the environment in which the robot operates.

Gazebo was chosen as the physics simulator. It offers a multi-robot environment with an accurate simulation of the dynamics, in particular gravity, contact forces, and friction. This dynamic simulation can be computed with different supported software libraries such as ODE (Drumwright, 2010) and Bullet (Coumans et al., 2013).

Any communication with the simulated robot, and control of the simulation itself, is done through the Robot Operating System (ROS) (Quigley et al., 2009), which is natively integrated with Gazebo.
ROS is a widely used middleware in the robotics community and provides C++ and Python APIs to the user.

3.3. Brain Interface and Body Integrator

The Brain Interface and Body Integrator (BIBI) plays a crucial role in the NRP, as it is the component that implements the connection between the robot and brain simulations. The main feature of the BIBI is the Transfer Function framework. A Transfer Function (TF) is a function that translates the output of one simulation into a suitable input for the other. Thus, we can identify two main types of transfer functions: Robot to Neuron TFs translate signals coming from robot parts, such as sensor readings and camera images, into neural signals such as spikes, firing rates, or electric currents; Neuron to Robot TFs convert neural signals from individual neurons or groups of neurons into control signals for robot motors. These two kinds of transfer functions thus close the action–perception loop by filling the gap between the neural controller and the robot.
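In spirit, the two directions just described reduce to a pair of pure functions. The concrete encodings below (sensor brightness mapped to a firing rate, spike counts mapped to a motor velocity) are simplified examples chosen for illustration, not the framework's built-in primitives:

```python
# Toy versions of the two main transfer-function directions.
# The specific encodings and parameters are illustrative only.

def robot_to_neuron(pixel_brightness, max_rate=100.0):
    """Map a sensor reading in [0, 1] to a firing rate in Hz."""
    clipped = min(max(pixel_brightness, 0.0), 1.0)
    return clipped * max_rate

def neuron_to_robot(spike_count, window_s=0.1, gain=0.01):
    """Map spikes counted in a time window to a motor velocity."""
    rate = spike_count / window_s  # spikes per second
    return gain * rate

rate = robot_to_neuron(0.5)     # 50.0 Hz for a half-bright pixel
velocity = neuron_to_robot(20)  # 20 spikes in 100 ms, scaled by the gain
```

Keeping each TF a stateless translation like this is what lets the framework treat both directions uniformly and chain them around the two simulators.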
The TFs also extend beyond the two types described above. For example, the robot–brain–robot loop can be short-circuited in order to bypass the brain simulation and use only a classical robotic controller, resulting in a Robot to Robot TF. This allows the comparison of a classical and a neural implementation of a robotic controller within the same setup, by simply switching from one transfer function to another. Moreover, the data coming from both simulations can be sent out of the loop to a monitoring module, where they can be live-plotted, processed, stored, or exported for data analysis with external tools (Robot to Monitor and Neuron to Monitor TFs).
In order to provide a proper abstraction layer toward the simulators, generic interfaces are provided, which are then implemented by specific adapters. On the robot simulator side, the interface is modeled following the publish–subscribe design pattern (Gamma et al., 1995): sensory information is expected to be published by the robotic simulator, and the Robot to Neuron TF subscribes to the corresponding topic to receive the data, while the Neuron to Robot TF publishes motor commands, and the simulator is expected to subscribe to them and execute them. This pattern is used by many robotics middlewares such as ROS and YARP (Metta et al., 2006), so minimal work is required to implement the adapters in such cases.
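The publish–subscribe contract the adapters implement can be reduced to a few lines. This is a generic in-process sketch of the pattern, not ROS or YARP code:

```python
# A minimal publish-subscribe broker, sketching the contract that the
# robot-simulator adapters implement. Not actual ROS or YARP code.

class Broker:
    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback registered on the topic.
        for callback in self._subscribers.get(topic, []):
            callback(message)

broker = Broker()
received = []
# A Robot to Neuron TF subscribes to sensory data...
broker.subscribe("/robot/camera", received.append)
# ...and the simulator publishes frames on the same topic.
broker.publish("/robot/camera", "frame-0")
```

The value of the pattern here is decoupling: neither side needs a reference to the other, so swapping the middleware only means swapping the adapter.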
In the current implementation of the NRP, ROS Topic adapters have been implemented. On the brain simulation side, the TFs provide stimuli and measurements by using Devices. Devices are abstract entities that have to be connected to the neural network, either to a single neuron or to a neuron population. Among such entities, there are spike generators and current generators (on the input side), and spike recorders, population rate recorders, and leaky integrators (on the output side). In the current implementation, devices are implemented as wrappers around PyNN object instances, providing general interfaces toward different neural simulators.
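Among the output devices, a leaky integrator can be understood as a membrane-like trace driven by incoming spikes: each spike makes the value jump, and it decays exponentially in between. A minimal discrete-time version, with illustrative parameters and update rule rather than the NRP's actual device implementation, looks like this:

```python
# Toy leaky-integrator device: turns incoming spikes into a smooth
# analog value that a transfer function can read out. The parameters
# and update rule are illustrative, not the NRP device implementation.

class LeakyIntegrator:
    def __init__(self, tau=0.05, weight=1.0):
        self.tau = tau        # decay time constant in seconds
        self.weight = weight  # contribution of each spike
        self.voltage = 0.0

    def update(self, dt, n_spikes):
        """Exponential decay plus a jump for every spike received."""
        self.voltage *= (1.0 - dt / self.tau)
        self.voltage += self.weight * n_spikes
        return self.voltage

li = LeakyIntegrator()
li.update(0.001, 3)   # three spikes arrive in one step
for _ in range(100):  # 100 ms of silence: the trace decays toward zero
    li.update(0.001, 0)
```

Reading out such a smoothed trace instead of raw spike times is what makes it convenient to drive continuous motor commands from a spiking population.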
The TF framework is implemented in the Python programming language, where the business logic of each TF resides in a function definition. A library of commonly used transfer functions, including common image processing primitives and simple translational models for motor command generation, is provided alongside the framework. Information about the TF connections is specified via a custom Domain Specific Language (DSL) implemented with Python decorators that specify the type of transfer function, the device types and the neurons to which they are connected, and the topics that the TF should subscribe or publish to (Hinkel et al., 2015, 2017). An example of a transfer function implementation is displayed in Listing 1.
LISTING 1 | An example of transfer function code, translating an image into spike rates.

@nrp.MapRobotSubscriber("camera", Topic('/robot/Camera', sensor_msgs.msg.Image))
@nrp.MapSpikeSource("red_left_eye", nrp.brain.sensors[0:3:2], nrp.poisson)
@nrp.MapSpikeSource("red_right_eye", nrp.brain.sensors[1:4:2], nrp.poisson)
@nrp.Robot2Neuron()
def eye_sensor_transmit(t, camera, red_left_eye, red_right_eye):
    image_results = hbp_nrp_cle.tf_framework.tf_lib.detect_red(image=camera.value)
    red_left_eye.rate = image_results.left
    red_right_eye.rate = image_results.right
In this example, it can be seen that several properties are
specified through the decorator DSL, such as the type of TF
(Robot to Neuron), the devices toward the brain simulation
(spike generators firing with Poisson statistics, attached to
the neuron population), and the input coming from the robotic
simulation (a camera image published on a ROS topic). It
can also be noticed that the actual business logic is implemented
inside the function; in particular, the image is processed
with a color detection filter implemented as part of the TF library
provided alongside the platform.
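The mechanics behind such a decorator DSL can be sketched in plain Python: each decorator attaches wiring metadata to the function object, and the outermost one registers it. The names below (map_subscriber, robot_to_neuron, TRANSFER_FUNCTIONS) are illustrative and are not the actual NRP decorators.

```python
# Registry that a framework would later iterate over each timestep.
TRANSFER_FUNCTIONS = []

def map_subscriber(name, topic):
    """Declare that parameter `name` is fed from a pub-sub topic."""
    def decorator(func):
        if not hasattr(func, "mappings"):
            func.mappings = {}
        func.mappings[name] = ("subscriber", topic)
        return func
    return decorator

def robot_to_neuron(func):
    """Mark the TF direction and register it with the framework."""
    func.direction = "robot_to_neuron"
    TRANSFER_FUNCTIONS.append(func)
    return func

@robot_to_neuron
@map_subscriber("camera", "/robot/Camera")
def example_tf(t, camera):
    # Business logic would go here; the decorators only declare
    # how the 'camera' parameter is wired to the simulation.
    return camera
```

Because the decorators only record metadata and return the function unchanged, the business logic stays ordinary Python, which is the property the NRP's DSL exploits.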
The choice of Python for the TF framework was the most
natural one, given that both the chosen physics and neural
simulators provide Python APIs. Consequently, the rest of the
server-side NRP components have been written in Python as well.
In principle, this could raise performance issues when compared
with languages like C++. We chose to avoid fine-tuning the
performance of the developed components, as the bottlenecks of
a simulation currently reside in the physics and neural simulators.
This choice also has the advantage of considerably simplifying
the development process.
Internally, the complete BIBI configuration, comprising the
transfer functions, the robot model, and the brain model, is
stored as an XML file. Each transfer function can be saved either
as Python code in an XML tag or can be constructed from custom
XML elements that are later parsed in order to generate the
equivalent Python code. The second way of describing these functions
is better suited for the automatic generation of such XML
files, via graphical editors that could also be used by scientists
with no experience in Python.
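Conceptually, such a BIBI file ties the three models together. The following fragment is purely illustrative: the element names, attributes, and file paths are assumptions for the sake of the example, not the platform's actual XML schema.

```xml
<!-- Illustrative sketch only: element and attribute names are
     assumed, not taken from the actual BIBI schema. -->
<bibi>
  <brainModel src="braitenberg.py">
    <population name="sensors" from="0" to="5"/>
  </brainModel>
  <bodyModel src="husky_model/model.sdf"/>
  <transferFunction src="eye_sensor_transmit.py"/>
</bibi>
```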
3.4. Closed Loop Engine
The Closed Loop Engine (CLE) is responsible for controlling the
synchronization, as well as the data exchange, among the
simulations and the TFs. The purpose of the CLE is to guarantee
createdinialized startedpaused
stopped
halted
inializepaus estart
stop
stop start
reset
stop
failed
FIGURE 4 | Lifecycle of a simulation in the NRP. During a normal cycle,
the simulation starts from the created state, passing through initialized as
soon as the resources are instantiated, then going through the started state
once the execution is initiated, and finally reaching the stopped state. During the
execution, the simulation can be paused at any time, while if any error occurs
during the normal lifecycle the simulation is halted.
FIGURE 3 | Synchronization between the components of a simulation,
as orchestrated by the CLE. In a first phase, the two simulations are run in
parallel. Afterward, each transfer function gathers data from the simulations and
computes the appropriate inputs for the next simulation step.
Falotico et al. The Neurorobotics Platform
Frontiers in Neurorobotics | www.frontiersin.org January 2017 | Volume 11 | Article 2
that both simulations start and run for the same timestep, and
also to run the TFs that will use the data collected at the end of the
simulation steps. Figure 3 shows a sequence diagram of a typical
execution of a timestep: after the physics and neural simulations
have completed their execution in parallel, the TFs receive and
process data from the simulations and produce the input for the
execution of the next step. The idea behind the proposed
synchronization mechanism is to let both simulations
run for a fixed timestep, receiving and processing the output of
the previous steps and yielding data that will be processed in
future steps by the concurrent simulation. In other words, data
generated by one simulation in the current timestep cannot be
processed by the other simulation until the next one. This can
be seen as the TFs introducing a delay of sensory perception and
motor actuation greater than the simulation timestep.
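The interleaving scheme can be sketched in plain Python. The simulator classes and method names below are placeholders for the actual CLE, Gazebo, and NEST interfaces; only the two-phase structure (parallel stepping, then TF evaluation) mirrors the text.

```python
import threading

class StubSimulator:
    """Placeholder for a physics or neural simulator."""
    def __init__(self):
        self.time = 0.0
    def simulate(self, dt):
        self.time += dt  # stand-in for one physics/neural step

def run_closed_loop_step(world_sim, brain_sim, transfer_functions, dt):
    # Phase 1: both simulations advance by the same timestep, in parallel.
    threads = [threading.Thread(target=sim.simulate, args=(dt,))
               for sim in (world_sim, brain_sim)]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
    # Phase 2: the TFs read the freshly produced outputs and compute
    # the inputs for the *next* step, which is where the one-step
    # perception/actuation delay described above comes from.
    for tf in transfer_functions:
        tf(world_sim, brain_sim)

world, brain = StubSimulator(), StubSimulator()
log = []
run_closed_loop_step(world, brain,
                     [lambda w, b: log.append((w.time, b.time))], dt=0.02)
```

After one step both stub clocks read the same simulated time, and the TF only ever sees data from the step that has just completed.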
We decided not to use MUSIC for the synchronization in this
first release, even though it was shown to work by Weidel et al.
(2015, 2016), in order to ease the communication between brain
and world simulations without introducing any intermediate layer.
Moreover, relying on the already existing Python APIs for the
communication with the two simulators simplified the
development process.
Besides orchestrating running simulations, the CLE is also
responsible for spawning new ones, by creating new dedicated
instances of the World Simulator and the Brain Simulator, and a
new instance of the orchestrator between the two.
3.4.1. Simulation Control
During its life cycle, each simulation transitions through several
states, as depicted in Figure 4. At the beginning, a simulation
is in state created, and it switches to state initialized once the
CLE is instantiated. Up to this point, no simulation steps have
been performed yet. Once the simulation is started, the CLE
starts the interleaving cycle, which can be temporarily interrupted by
pausing the simulation (paused) or preemptively terminated by
stopping the simulation (stopped). If any error occurs during the
execution or during the transitions between states, the simulation
automatically passes to the state halted. The reset transition can
be considered parametrized, as it allows separate parts of the
simulation to be restored to their initial status individually.
Currently, the resettable parts of a simulation are the robot pose,
the brain configuration, and the environment.
Thanks to the possibility of pausing and restarting the closed
loop cycle during the simulation execution, it was possible to add
features that modify simulation properties at runtime, without
the need to restart the simulation from scratch. These features
include adding, editing, and removing transfer functions, as well
as editing the brain model and the environment. Using these features, it
is possible to test different configurations of the simulation and
immediately see their effects, without having to wait for a
complete restart.
From the point of view of the implementation, the timestep of
the physics simulation is sent to Gazebo through a ROS service
call, while the brain simulation is directly run for the desired
timestep with a PyNN call, as can be observed in the architecture
depicted in Figure 5. The ROS service calls and the PyNN calls
are implemented through generic adapter interfaces and perform
a client–server interaction. Hence, in principle, a CLE instance
can interact with simulators other than the ones currently
supported (Gazebo and NEST). This abstraction layer, besides
providing the possibility to change the underlying simulators with
relative ease, simplifies the update process of the simulators by
limiting the number of files that need to be changed in response
to a possible API update.
3.4.2. State Machines for Simulation Control
In real experiments, it is often the case that the environment
changes in response to occurring events, generated by the behavior
of the subject, by the experimenter, or automatically
(i.e., timed events). Thus, in order to reproduce this behavior, the
possibility to generate events that can influence the environment
was added to the platform. In particular, an event system is
provided, and the user can interact with some objects, such as
changing the brightness of lights or screen colors, without having
to interrupt the simulation. The event system is implemented with
a state machine that is programmable by the user. In the current
implementation, support for timed events is provided, allowing
FIGURE 5 | Architectural overview of the CLE and of the communication layers. The CLE orchestrates the two simulations and performs the data exchange
through generic adapter interfaces. It also provides two interfaces, one for controlling an ongoing simulation and one for spawning a new one by instantiating a
neural simulator and a physics simulator. In the current implementation, adapters for accessing the Gazebo physics simulation via ROS and the NEST neural
simulation via PyNN are provided. In particular, robot data are accessed through ROS services and topics, and the physics simulation is controlled through ROS
services.
the user to program changes in the environment that have to
occur at specific points in time.
The event system is managed by the State machines manager,
implemented using the SMACH state machine framework
(Bohren and Cousins, 2010), which is already integrated into
ROS. Using such a framework, it is possible to program timed
events that directly call Gazebo services in order to modify the
environment.
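A timed event mechanism of this kind can be sketched in a few lines of plain Python. In the NRP the events are SMACH states that call Gazebo services; here the event action is just a callback, and all names are illustrative.

```python
import heapq

class TimedEventQueue:
    """Illustrative scheduler firing actions at given simulation times."""

    def __init__(self):
        self._events = []   # heap of (trigger_time, sequence, action)
        self._counter = 0   # tie-breaker so actions are never compared

    def schedule(self, trigger_time, action):
        heapq.heappush(self._events, (trigger_time, self._counter, action))
        self._counter += 1

    def advance_to(self, sim_time):
        """Fire every action whose trigger time has been reached."""
        while self._events and self._events[0][0] <= sim_time:
            _, _, action = heapq.heappop(self._events)
            action()

fired = []
queue = TimedEventQueue()
queue.schedule(2.0, lambda: fired.append("dim lights"))
queue.schedule(5.0, lambda: fired.append("change screen color"))
queue.advance_to(3.0)   # only the 2.0 s event has come due
```

Driving the queue from the simulation clock rather than wall-clock time keeps the events reproducible across runs, which is what a simulated experiment workflow needs.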
3.5. Backend
The Backend is the component connecting the user interface
to the core components of the platform, exposing a web server
implementing RESTful APIs on the user interface endpoint and
forwarding processed user requests via ROS on the other
endpoint. This component is the first handler of user requests. In
case they cannot be completely managed within the Backend,
they are forwarded either to the CLE or to the State machines
manager, which eventually completes the request processing,
interacting, if necessary, with the simulators. An overview of the
Backend architecture and of its interaction with other components
is depicted in Figure 6.
Actions provided by the Backend to the user interface (ESV)
include experiment listing and manipulation; simulation listing,
handling, and creation; and the gathering of backend diagnostics
and information.
Every available experiment on the platform is identified by
a name and a group of configuration files, including a preview
image to be shown in the ESV and files representing the
environment, brain, state machines, and BIBI, where neural populations
and transfer functions are stored. The experiment listing and
manipulation APIs allow the user to list all the available experiments
on the server as well as to retrieve and individually customize the
configuration files of an experiment.
In the NRP setting, a simulation is considered an instance of
one of the available experiments. In order to create a new
simulation, the user has to proceed in a different way depending on
whether the NRP is accessed from within or outside of the Collaboratory
Portal. Users accessing from the Collaboratory Portal can clone
the configuration files of one of the available experiments into
the Collaboration storage they are using and instantiate that
local copy of the experiment. The Backend allows users to
overwrite said configuration files as well as to save CSV
recordings of simulation data directly on the storage. Users not
working from the Collaboratory Portal can instantiate an
experiment chosen directly from the experiment list and edit it
without having to instantiate a local copy.
Once a simulation is created, the Backend allows the user to
retrieve and change its current state according to the simulation
lifecycle depicted in Figure 4, by interfacing with the CLE.
Other APIs provide functionality for retrieving and editing
at runtime the brain configuration, the state machines, and
the transfer functions, again delegating the task to the CLE.
Furthermore, information about the simulation metadata,
brain populations, and environment configuration is available
through dedicated APIs.
For diagnostic purposes, the Backend provides APIs for
retrieving the errors that have occurred on the server as well as
the versions of the Backend itself and of the CLE.
3.6. Experiment Simulation Viewer
The Experiment Simulation Viewer (ESV) is the user interface of the NRP.
It is implemented as a web-based application, developed using a
modern web software stack exploiting established open-source
Simu laonservice
Simu laon controller
Experiment service
Watc hdog
Simu laon
provider
Simu laon
contro ller
ROS
ROS
State
machines
manage r
ROS
Gaze bo
ROS
ROS
create new
simu laon
simu laon
start, stop,
pause, ...
state machine
start, stop,
reset
transfer
func on add,
edit , dele te
simulaon
creaon API
si
mulaon
co
ntrol API
expe ri me nt
informaon API
server info &
diagnoscs API
REST
REST
REST
REST
Backend CLE
FIGURE 6 | Architectural overview of the Backend. User inputs coming from the ESV are sent to the Backend via REST APIs. These requests are then
dispatched to the CLE or to the State machine manager, which handle them by forwarding the appropriate commands to Gazebo.
soware. e ESV is currently integrated in the Collaboratory
Portal (see Figure7A) using the Collaboratory APIs. By building
it using standard web technologies, cross-platform support, also
for mobile devices, is enabled. e downside of this choice is the
added complexity of using translation layers, albeit lightweight
ones, for the interaction with server-side components.
The ESV simulation interface embeds a 3D view that allows
the user to see and navigate through the virtual environment,
and a user bar for simulation control (e.g., for playing, pausing,
resetting, or stopping the ongoing simulation). It also provides
means for editing objects by altering their attributes, and for
monitoring brain activity and the state of the embodiment in a
running simulation. Furthermore, the simulation interface hosts
the tools that allow the user to design and edit an experiment,
explained in depth in Section 3.6.3. Any modifications to the
running simulation can be exported to the user's computer or
saved on the Collaboratory storage.
In the following sections, we first present the ESV user
interface and its architecture, and then describe the design tools.
3.6.1. User Interface
Entering the ESV, the user is presented with a list of available
experiments (see Figure 7A). For each experiment, the user can
choose to launch a new simulation or to join an already launched
one as a spectator; it is also possible to launch an instance of an
existing simulation while uploading a custom environment in
which it will be executed, thus replacing the original one.
The user starting a simulation is called the owner of that
simulation, whereas any other user is called a watcher. The owner has
full control over the simulation, being able to start, pause, stop, or
reset it and to interact with the environment while it
is running. Other features, like monitoring or navigating the
scene, are accessible to both owners and watchers.
Of particular interest are the monitoring features (Figure 7B).
The Spike Monitor plots a spike train representation of the
monitored neurons in real time. Monitored neurons must be
specified by transfer functions, as described in Section 3.3.
The Joint Monitor plots a line chart representation of the
joint properties of the robot in real time. For every selected joint,
properties like position, velocity, and effort can be chosen.
The goal of these monitoring tools is to provide live insights into how
the simulation performs. Both spike data and joint data can also
be saved in CSV format for further off-line analysis (see Section
3.6.3.3).
3.6.2. Architecture
In order to have a coherent user interface and experience throughout,
all the tools developed in the Human Brain Project, including
the NRP user interface, are implemented as web applications. An
architectural overview is shown in Figure 8.
The application framework of choice is AngularJS,3 a
Model View Controller (MVC) web framework for developing
single-page applications. Using AngularJS services, the
interaction with the NRP Backend, which provides the API for
the simulation control, is realized via standard REST calls.
The rendering of the 3D view of the virtual environment is
performed by Gzweb, Gazebo's WebGL client. It comprises two
main parts, gz3d and gzbridge, which are, respectively, responsible
for visualization and for communicating with Gazebo's
backend server, gzserver.
To enable the communication with the CLE and Gazebo via
ROS, the ESV employs roslibjs, a JavaScript library. Roslibjs in
turn interacts via WebSockets with rosbridge, a tool providing a
JSON API to ROS functionality for non-ROS programs.
3.6.3. Editors
In order to design the experiment to be simulated, the NRP
provides the user with a complete array of tools. Thanks to these
3 https://www.angularjs.org/.
FIGURE 7 | The Experiment Simulation Viewer. Through the Collaboratory Portal (A), the user can choose between several predefined experiments or create his
own experiment based on templates. Some of the features of the Collaboratory Portal, such as a navigation pane and a group chat, are shown. (B) The ESV Main
View, where an experiment is being executed. The user interface at the bottom allows the user to interact with the simulation and displays information about
simulation time. Some widgets, togglable from the user bar, allow the user to monitor brain activity in terms of spike trains or joint values of the simulated robot.
FIGURE 8 | Architectural overview of the ESV and its outgoing communications. The ESV core interacts with the Backend via REST APIs to control the
simulation. Two websocket channels bypass the Backend and allow the ESV to interact directly with Gazebo and the CLE. The gzbridge channel is used for
gathering information about the scene rendering. The rosbridge channel is used for collecting information related to brain spikes and joint angles, which are shown
on appropriate monitoring widgets in the GUI, and for getting information about the simulation status and possible errors.
tools, it is possible to configure all the aspects of an experiment:
Environment, Transfer Functions, Brain, and Experiment
Workflow.
3.6.3.1. Environment Editor
The purpose of the Environment Editor is to allow the user of
the platform to set up the scene in which the simulation will
run, either starting from scratch or editing one from an existing
experiment.
The Environment Editor is seamlessly integrated into the ESV
application, which dramatically shortens the time needed to
prototype an experiment. Switching between simulation and
environment editing is a very fast process: the user can immediately
simulate the interaction of the robot with the new environment
and, if not satisfied, directly modify it again.
While running the Environment Editor, the user is able to
move (e.g., translate or rotate) or delete existing objects in the
scene, or to place new objects by choosing them from a list of
models (Figures 9A and 10).
When the editing of the scene is completed, the user can export
the result in the Simulation Description Format (SDF),4 either
to a local workstation or to the Collab storage of the respective
experiment. Once saved, the environment can later be loaded
into a different, new or existing, experiment.
Importing a new environment into an existing experiment does
not change the workflow of the experiment itself, i.e.,
it keeps its transfer functions, state machines, BIBI, and
the robot involved.
3.6.3.2. Brain Editor
The Brain Editor (Figure 9D) allows the user to upload and edit
custom brain models as PyNN scripts (Davison et al., 2008).
The PyNN script describing the brain model used in the
current experiment is shown in a text editor featuring Python
keyword and syntax highlighting. It is also possible to define
populations (i.e., sets of neuron indices) that can be referred to
in transfer functions.
4 http://sdformat.org/.
Once the user has finished editing, the new model can be
applied without restarting the whole simulation.
3.6.3.3. Transfer Functions Editor
The Transfer Functions (TFs) describe how to transform simulator-specific
sensory data (such as images, joint angles, forces, etc.)
into spiking activity for the neural network simulation, and vice versa.
TFs are defined as Python scripts exploiting the DSL described
in Hinkel et al. (2015). As for the Brain Editor, the Transfer
Functions Editor (Figure 9B) displays these scripts and enables
the user to change them in a text editor pane found in the menu.
From within the editor, the user can create and edit TFs as well
as save them to and load them from files. Once edited, the changes
can be applied to the simulation. Thus, the user can immediately
test the robot's behavior and, possibly, modify it again, resulting
in a very short cycle of tuning and testing. Every uploaded transfer
function is checked for syntax errors, and several restrictions on
Python statements are applied for security reasons.
Furthermore, the user can log TF data to files in the
Collaboratory storage to analyze them at a later time. The data
format used is the standard Comma Separated Values (CSV).
As with the other editors, the edited TFs can be downloaded to
the user's computer or saved to the Collaboratory storage.
3.6.3.4. Experiment Workflow Editor
The workflow of an experiment is defined in terms of events
which are triggered by simulation time, user interaction, or the
state of the world simulation. In the current implementation, all
events manipulate the simulated environment, as no stimulation
of the brain or manipulation of the brain-controlled robot can be
performed by the State machine manager.
The workflow is specified in Python code exploiting SMACH
(Bohren and Cousins, 2010), a state machine library integrated
into ROS. This approach enables users to specify complex
workflows in terms of state machines.
FIGURE 9 | The ESV editors' menu panes. With the Environment Editor (A), the user can add an object to the environment, choosing from a library of models.
The Transfer Function Editor (B) allows live editing of the Transfer Functions, without the need to restart the simulation. With the SMACH State Machine Editor (C),
which currently implements the Experiment Workflow Editor, actions to be performed by the State machine manager can be defined. The brain model used in the
simulation can be edited with the Brain Editor (D).
FIGURE 10 | Adding and editing a new object with the ESV Environment Editor. The user can change object properties by using appropriate handles or by
manually inserting property values (e.g., position coordinates) in a form displayed in a widget.
A state machine controlling an experiment interacts with the
running simulation by publishing on ROS topics and calling ROS
services, and it can monitor any simulation property published on
ROS topics (e.g., simulation time, sensor output, and the spiking
activity of the brain). Like the other editors, the Experiment Workflow
Editor (Figure 9C) displays Python scripts and allows the user to
change them in a text editor.
3.7. Robot Designer
In order to build neurorobotic experiments, the NRP not only
has to offer scientists a rich set of robot variants to choose from
but must also give them the opportunity to integrate virtual
counterparts of existing robots in their lab, or robots with the desired
morphology for a special, given task. The Robot Designer (RD)
hence aims at being a modeling tool for generating geometric,
kinematic, and dynamic models that can be used in the
simulation environment.
Developing custom software (either web or desktop) for
modeling and designing a robot from scratch is an enormous
undertaking, so we decided to adopt existing solutions. In
particular, no reasonable web solutions were found, and adapting
existing solutions to the web would require a considerable effort
that would not be counterbalanced by the possible benefits.
Among the existing modeling software packages, we chose Blender
(a powerful and extendable open-source software), due to its
availability for a wide range of platforms with a simple installation
process.
Existing extensions for Blender with similar goals were taken
into account when developing the Robot Designer, most
notably the Phobos project5 and the RobotEditor of the
OpenGrasp simulation suite (León et al., 2010; Terlemez et al.,
2014). The RobotEditor project was finally chosen as the basis of
the Robot Designer after an evaluation against competing projects.
Afterward, it went through a major refactoring and was
extended with components required for the NRP. These include
importing and exporting files with support for the
Gazebo simulator, additional modeling functionality, a refined
user interface, and data exchange with the Collaboratory storage.
The RD provides users with an easy-to-use interface that
allows the construction of robot models and the definition of their
kinematic, dynamic, and geometric properties. The robotics-centered user
interface of the RobotEditor has been redesigned and allows the
user to define kinematic models of robots by specifying segments
5 https://github.com/rock-simulation/phobos.
FIGURE 11 | The Robot Designer. Using the Robot Designer, the user is able to edit the kinematic properties of a robot model (A). An example of a completed
model of the six-legged walking machine Lauron V (Roennau et al., 2014), with a collision model with safety distances, is shown in (B). Deformable meshes can be
transformed into disjoint rigid bodies for collision model generation (C), by considering the influence of each joint on the mesh vertices, e.g., hip joint (D) and knee
joint (E) for a human leg.
and joints either using Euler angles or following the Denavit–
Hartenberg convention (Denavit, 1955). The robot dynamic
model can be created through mass entities with inertia tensors
and through controller types with parameters for the joints. For
geometric modeling, the RD can rely on the vast 3D modeling
capabilities provided by Blender, although several additions were
made for the automation of robot-related tasks. Figure 11A shows
on the left the Robot Designer panel inside Blender while editing
the properties of a segment of a six-legged robotic platform, Lauron
V (Roennau et al., 2014). The plugin provides overlays for the 3D
view that show reference frames and names for each robot joint,
thus facilitating editing.
The original code of the RobotEditor has been heavily
refactored, and the documentation for users and developers
of the Robot Designer and the core framework has been greatly
expanded. The core framework offers many additional features
such as resource handling, logging to external files, debug
messages with call stacks, and pre- and postcondition checking
for the validation of functionality.
Data exchange with the NRP and with ROS has been a major
aspect of the development of the Robot Designer. For this reason,
support for the widespread Unified Robot Description Format
(URDF)6 file format has been added and improved in several
ways during the development. An XML schema definition file has
been generated for this file format, which then made it easier
to generate language-specific bindings7 requiring only a small
interface between internal data types and the representation in
the XML document (see Section 3.3).
In addition to exporting raw URDF files, the Robot Designer
also supports novel features unique to the robot simulation of
the NRP. Above all, this includes generating input to a Gazebo
plugin loaded by the CLE. It automatically generates software
implementing the necessary joint controllers for position and/or
velocity control. This additional information is not included in the
URDF standard and is stored together with the model in the same
file. For the user of the NRP, this means that different controller
types and parameters for each joint can conveniently be specified
directly in the designer and become available in the simulation
without the need to write additional joint controller software
and deploy it on the platform servers. The persistent storage
and data exchange mechanisms of the Robot Designer offer the
6 http://wiki.ros.org/urdf/XML.
7 https://pypi.python.org/pypi/PyXB.
TABLE 1 | Summary of quality control statistics for the NRP repositories.

Repository          Total lines  Tests  Line coverage (%)  Branch coverage (%)
CLE                 2,944        147    88                 100
Backend             3,045        239    93                 100
Frontend (ESV)      2,427        455    95                 87
Experiment control  455          46     96                 100
user the option to encapsulate models into installable and zipped
ROS packages.
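For reference, the URDF standard describes a robot as a tree of links connected by joints; a model of the kind the Robot Designer exports can be as small as the following fragment. The names and numeric values here are illustrative only, and the NRP-specific controller parameters mentioned above are additions stored with the model, not part of this standard format.

```xml
<!-- Minimal URDF fragment; names and values are illustrative. -->
<robot name="example_arm">
  <link name="base_link">
    <inertial>
      <mass value="1.0"/>
      <inertia ixx="0.01" ixy="0" ixz="0" iyy="0.01" iyz="0" izz="0.01"/>
    </inertial>
  </link>
  <link name="upper_arm"/>
  <joint name="shoulder" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10.0" velocity="1.0"/>
  </joint>
</robot>
```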
Several modeling and automation features were also added
to the already feature-rich modeling software. Collision models
can automatically be created from the geometric model of a robot,
either by computing its complete convex hull or an approximate
convex hull with a fixed polygon count and an additional
safety distance to the original mesh (see Figure 11).
When generating collision models for deformable geometries,
the underlying mesh, in which each vertex is linearly influenced by
multiple joints, has to be transformed into several disjoint rigid
bodies. The RD can perform this transformation based on several
rules (see Figures 11C–E), as demonstrated on a showcase mesh
created with the MakeHuman8 project.
Finally, automatic robot generation from the mathematical
kinematic model has been added as an experimental feature.
The Robot Designer provides an easy installation process. To
download and activate the software, the user has to execute an
installation script that runs within Blender. This script,
the Robot Designer itself, and its documentation are hosted in a
publicly available repository.9
4. SOFTWARE DEVELOPMENT METHODOLOGY
The NRP is developed within the Scrum agile software development
framework (Schwaber and Beedle, 2002). The basic unit of
development, in Scrum parlance, is called a Sprint: a time-boxed
effort restricted to a specific duration of either 2 or 3 weeks.
This methodology provides a reactive environment, able to deal
with changing requirements and architectural specifications.
e Scrum process includes daily stand-up meetings, where
each team of developers discusses about how they are working to
meet the sprint goals. At the end of the sprint, a review meeting
is held; the whole NRP team is present, and the members make
demonstrations of the soware in stable development status.
Each completed task provides a new feature to the user, without
breaking compatibility with the current code base. us, at the
end of each sprint, there is a new shippable platform that provides
new features.
e NRP soware process uses industry standards for quality
control. e acceptance criteria of the version control system
include the necessity of a code review by, at least, a second pro-
grammer, while a continuous integration system ensures that new
code does not introduce regressions by executing a set of unit tests.
Moreover, code coverage criteria ensure that at least 80% of the
code is covered during tests and coding standards are enforced by
automatic static code analysis tools (PEP8 and Pylint). Each build
in the continuous integration system also produces the soware
documentation documenting the APIs and comprising soware
usage examples. A summary of quality control statistics regarding
the main NRP repositories is presented in Tabl e1 . No repository
has PEP8 or Pylint errors.
8 http://www.makehuman.org/.
9 https://github.com/HBPNeurorobotics/BlenderRobotDesigner.
5. USE CASES FOR THE NEUROROBOTICS PLATFORM
In order to assess the functionalities of the NRP, several experiments were designed. These experiments, albeit simple in nature, aim at demonstrating various features of the platform. The first use case is a proof of concept: a very simple brain model is connected to a robot via TFs in order to obtain a complete action–perception loop performing a Braitenberg vehicle experiment (Braitenberg, 1986). Results show that the two simulations are properly synchronized and that the experiment is correctly performed.
Then, an experiment exploiting the capability of the TF framework to implement classic robotic controllers was designed and implemented. In this case, the robot–brain loop is short-circuited, and a controller implemented inside a TF is used to perform sensorimotor learning with a robotic arm.
Finally, in order to demonstrate the extensibility of the framework, an existing computational model of the retina was integrated into the platform and used to perform bioinspired image processing.
5.1. Basic Proof of Concept: Braitenberg Vehicle
This experiment was designed in order to validate the overall functionality of the NRP framework. Taking inspiration from Braitenberg vehicles, we created an experiment in which a four-wheeled robot equipped with a camera (Husky robot from Clearpath Robotics) was placed inside an environment with two virtual screens. The screens can display a red or blue image, and the user can interact with them through the ESV by switching the displayed image. The robot behavior is to turn counterclockwise until it recognizes the red color and then to move toward the screen displaying the red image.
The overall control architecture can be observed in Figure 12A. Identification of the red color is done in a robot to neuron transfer function, where the image coming from the robot camera is processed with a standard image processing library, OpenCV, in order to find the percentage of red pixels in the left and right halves of the image. This information is then translated into firing rates and sent as input to Poisson spike generator devices. These devices provide the input for a simple Brain Model comprising 8 neurons. Among these, three are sensor neurons, receiving inputs from the spike generator devices, and two are actor neurons, encoding the generated motor commands. The behavior of the neural network is to make one of the two actor neurons fire at a much higher rate than the other if no input encoding red pixels is present, while making the two neurons fire with a firing rate that
FIGURE 12 | Braitenberg vehicle experiment. The control model (A) uses a color detection robot to neuron transfer function to convert the image into spike rates, a simple spiking neural network comprising 8 neurons, and a neuron to robot transfer function that translates membrane potentials into motor commands. The motor signals sent to the robot wheels are directly correlated with the red pixels' percentage in the camera image (B). This is also reflected by the changes in brain activity during the trial (C).
is more similar the more red is present in the two image halves. Two leaky integrator devices receive input from the actor neurons and are used in the neuron to robot transfer function responsible for the generation of motor commands. In particular, the membrane potentials of these devices are used to generate motor commands for the left and right wheels such that, when the two actor neurons' firing rates differ, the wheels turn in opposite directions, effectively turning the robot, and when the firing rates match, the wheels move in the same direction, moving the robot forward.
The behavior of the experiment is shown in Figures 12B,C, where it can be observed that every time there is a rise in the red percentage on the image, there is an increase in the spike rate of neuron 7 so that it matches that of neuron 8. Then, the generated motor commands change accordingly, and the wheels move in the same direction, effectively moving the robot forward.
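The two sides of this loop can be sketched as plain Python functions. This is a simplified stand-in: in the actual platform these are decorated transfer-function callbacks wired to Poisson generator and leaky integrator devices, and the thresholds, rate constants, and helper names below are invented for the sketch:

```python
# Illustrative stand-ins for the two Braitenberg transfer functions.
# The real NRP versions are decorated TF callbacks; the "red" threshold
# and the maximum rate here are invented values.

def red_pixel_rates(image):
    """Robot-to-neuron side: fraction of red pixels in each image half,
    mapped to firing rates (Hz) for the Poisson spike generators."""
    h, w = len(image), len(image[0])
    halves = [0, 0]
    for row in image:
        for x, (r, g, b) in enumerate(row):
            if r > 200 and g < 100 and b < 100:      # crude "red" test
                halves[0 if x < w // 2 else 1] += 1
    n = h * (w // 2)                                 # pixels per half
    max_rate = 2000.0
    return [max_rate * c / n for c in halves]        # [left_hz, right_hz]

def wheel_commands(v_left, v_right, gain=1.0):
    """Neuron-to-robot side: leaky-integrator membrane potentials become
    wheel speeds; equal potentials drive the robot forward, unequal ones
    make the wheels counter-rotate, turning the robot in place."""
    forward = gain * min(v_left, v_right)
    turn = gain * (v_left - v_right)
    return forward + turn, forward - turn            # (left, right)

# A frame whose left half is red yields a high left rate, zero right rate.
frame = [[(255, 0, 0)] * 4 + [(0, 0, 255)] * 4 for _ in range(4)]
left_hz, right_hz = red_pixel_rates(frame)
```

The actual devices insert the spiking network between the two functions; the sketch only shows the encoding and decoding ends of the loop.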
5.2. Classic Robot Controller: Sensorimotor Learning
The goal of the experiment is to learn sensorimotor coordination for target reaching tasks to be used in future manipulation experiments. In particular, the experiment aims at learning a forward model for an anthropomorphic arm, i.e., estimating the tool center point (TCP) position from the current joint configuration. In its current form, the experiment consists of two phases, repeated at every iteration: in the first phase, shown in Figure 13B, the robot explores the working space and learns its kinematics by performing random movements (i.e., motor babbling), observing its TCP position and the corresponding joint configuration; in the second phase, the model is evaluated by moving the arm to a random position and comparing the TCP predicted by the learnt kinematic model with the real one. This experiment does not use any brain model; it thus shows that the NRP also provides a framework for implementing classic robot controllers.
The control schema of the experiment is presented in Figure 13A. The state machine for experiment control switches between the different phases and communicates the current phase to the robot controller. The robot controller implements a supervised learning method, the Kinematic Bezier Maps (KBM) (Ulbrich et al., 2012), and communicates directly with the simulated robot in a robot to robot transfer function. During the learning phase of each iteration, the robot controller moves the arm to a random joint configuration and feeds this information, alongside the real TCP of the attached end effector, to the KBM model. During the evaluation phase, the arm is moved into another
FIGURE 13 | Sensorimotor learning experiment. The control schema includes a state machine for experiment control using a classical robot controller and a monitoring module (A). The state machine switches between the two phases of the experiment (B): a motor babbling phase for training, then evaluation of the TCP prediction. After each training iteration, the prediction error decreases, reaching 1 cm of accuracy for the TCP estimation after 40 iterations (C).
random joint configuration, and the KBM model is used to predict the position of the new TCP. This information is sent to the monitoring module. The monitoring module also gathers information from the simulation, such as the real TCP and joint values. This information can be stored, displayed, or further processed; in particular, it is used to compute the accuracy of the KBM prediction. Figure 13C shows the learning curve for the training of the kinematic model, where the error is computed as the distance in space between the predicted TCP and the real one. It can be noticed that the error decreases over the training iterations, reaching an accuracy of 1 cm.
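The babbling/evaluation loop can be sketched with a toy 2-link planar arm and a nearest-neighbor predictor standing in for the KBM model. The arm geometry, iteration count, and the learner are all invented for this sketch; only the error metric (Euclidean distance between predicted and real TCP) matches the description above:

```python
import math
import random

# Toy stand-in for the sensorimotor learning experiment: a 2-link planar
# arm with arbitrary link lengths, motor babbling that collects
# (joints, TCP) pairs, and a nearest-neighbor "forward model" in place
# of the Kinematic Bezier Maps used in the actual experiment.

L1, L2 = 0.3, 0.25   # link lengths in meters (invented)

def real_tcp(q1, q2):
    """Ground-truth forward kinematics of the simulated arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return (x, y)

class NearestNeighborModel:
    """Predict the TCP of the closest previously observed joint state."""
    def __init__(self):
        self.samples = []                      # [(q1, q2, tcp), ...]
    def learn(self, q1, q2, tcp):
        self.samples.append((q1, q2, tcp))
    def predict(self, q1, q2):
        _, _, tcp = min(self.samples,
                        key=lambda s: (s[0] - q1)**2 + (s[1] - q2)**2)
        return tcp

random.seed(0)
model = NearestNeighborModel()
errors = []
for it in range(200):
    # Learning phase: babble to a random configuration, observe the TCP.
    q = (random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
    model.learn(*q, real_tcp(*q))
    # Evaluation phase: move to another random configuration and compare
    # the predicted TCP with the real one (Euclidean distance).
    q = (random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
    px, py = model.predict(*q)
    rx, ry = real_tcp(*q)
    errors.append(math.hypot(px - rx, py - ry))

# The mean evaluation error shrinks as the sampled workspace fills up.
early, late = sum(errors[:20]) / 20, sum(errors[-20:]) / 20
```

As in the experiment, the prediction error is a decreasing function of the number of babbling iterations, though the toy learner converges far more slowly than the KBM method.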
5.3. Integration of Bioinspired Models: Retinal Vision
In order to obtain fully biologically inspired closed-loop controllers, the transfer functions should also make use of neuroscientific models of sensor information processing on one side and of motion generation on the other. As a first step in this direction, a model of the retina was included in the NRP as a robot to neuron transfer function (Ambrosano et al., 2016).
The model chosen for the integration was COREM, a computational framework for realistic retina modeling (Martínez-Cañada et al., 2015, 2016), which provides a general framework capable of simulating customizable retinal models. In particular, the simulator provides a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina functions: one spatial processing module (a space-variant Gaussian filter), two temporal modules (a low-pass temporal filter and a single-compartment model), a configurable time-independent non-linearity, and a Short-Term Plasticity (STP) function.
The integration work proceeded by creating Python bindings for the C++ COREM implementation and by adding the appropriate functions to feed the camera image into the model and extract the retinal output without changing the core implementation. This implementation provides, as an output, analog values representing the intensity of presynaptic currents of ganglion cells (Martínez-Cañada et al., 2016). Thus, the retina simulator now provides an interface that is callable by the transfer function framework. Moreover, the retina model is defined via a Python script, which can be uploaded by the user.
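The wrapping pattern can be illustrated with a self-contained sketch in which a first-order low-pass temporal filter (one of the COREM building blocks listed above) stands in for the full retina pipeline. The class and function names are hypothetical, not COREM's actual Python API:

```python
# Sketch of the robot-to-neuron wrapping pattern: feed each camera frame
# into a retina stage, read back analog "ganglion" values. A first-order
# low-pass temporal filter stands in for the full COREM pipeline; all
# names and constants here are illustrative, not the real bindings.

class LowPassRetinaStage:
    def __init__(self, num_pixels, tau=0.1, dt=0.02):
        self.alpha = dt / (tau + dt)          # first-order IIR coefficient
        self.state = [0.0] * num_pixels

    def feed(self, frame):
        """Update the temporal filter with one frame of pixel intensities."""
        for i, v in enumerate(frame):
            self.state[i] += self.alpha * (v - self.state[i])

    def output(self):
        """Analog values playing the role of ganglion-cell currents."""
        return list(self.state)

def retina_transfer_function(stage, camera_frame):
    """What a robot-to-neuron TF does on each CLE loop: push the camera
    image through the retina, return currents for generator devices."""
    stage.feed(camera_frame)
    return stage.output()

stage = LowPassRetinaStage(num_pixels=4)
for _ in range(200):                           # constant stimulus
    currents = retina_transfer_function(stage, [1.0, 0.5, 0.0, 1.0])
```

Under a constant stimulus the filter output converges to the input, mirroring how the transfer function delivers a steady current to the generator devices once the scene is static.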
In order to test the proper integration of the retina simulator, a first experiment involving visual tracking of a moving target via retinal motion recognition was designed. The environment setup consisted of placing the simulated robot (iCub humanoid
FIGURE 14 | Visual tracking with retinal image processing experiment. (A) The visual tracking model embeds a retina model capable of exploiting red–green opponency as a robot to neuron transfer function, a two-layer brain model that filters color information, and a neuron to robot transfer function that uses the filtered target position information to generate motor commands for the eye. The model is able to correctly detect a moving target, as shown in panels (B,C), where the target estimated position and the corresponding brain activity are presented. When the eye moves, a noisier retinal input is produced, but the brain model is still able to filter it and perform step response tasks (D,E) and pursuit of linearly moving targets (F,G).
robot) in front of a virtual screen. The screen displayed a red background with a green circle that can be controlled (the target). The overall control scheme can be observed in Figure 14A. This model improves a previously designed visual tracking controller implemented using the same Brain Model as the experiment described in Section 5.1 (Vannucci et al., 2015). A model of retinal red–green opponency was used as a robot to neuron transfer function. This opponency is a basic mechanism through which color information is transmitted from the photoreceptors to the visual cortex (Dacey and Packer, 2003). The model has two retinal pathways whose outputs are more sensitive to green objects appearing in receptive fields that were earlier stimulated by red objects, and vice versa. Only one horizontal stripe of the retinal output, intersecting the target position, is extracted and
fed into a brain model via current generator devices. The brain model consists of 1,280 integrate and fire neurons organized in two layers. The first layer acts as a current to spike converter for the retina ganglion cells, while in the second layer, every neuron gathers information from 7 neurons of the first layer, acting as a local spike integrator. Thus, the second layer population encodes the position of the edges of the target in the horizontal stripe (corresponding to 320 pixels). This information, encoded as a spike count, is then used by the neuron to robot transfer function in order to find the centroid of the target. Information about the target centroid can then be used to generate motor commands that make the robot perform visual tracking of the moving target.
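The decoding step can be sketched as follows. The window size and spike-count values are invented for the illustration, but the pooling-then-weighted-average logic mirrors the description above:

```python
# Sketch of the decoding described above: second-layer neurons pool the
# spike activity of groups of first-layer neurons, and the target
# centroid is the spike-count-weighted average of the encoded pixel
# positions. All numbers (group size, counts) are illustrative.

def pool_layer(first_layer_counts, group=7):
    """Each second-layer neuron integrates `group` first-layer neurons."""
    return [sum(first_layer_counts[i:i + group])
            for i in range(0, len(first_layer_counts), group)]

def target_centroid(second_layer_counts, stripe_width=320):
    """Spike-count-weighted average position along the horizontal stripe."""
    total = sum(second_layer_counts)
    if total == 0:
        return None                      # no target detected this cycle
    px_per_neuron = stripe_width / len(second_layer_counts)
    weighted = sum(c * (i + 0.5) * px_per_neuron
                   for i, c in enumerate(second_layer_counts))
    return weighted / total

# Toy activity: first-layer spikes concentrated around one region.
first_layer = [0] * 28 + [3, 5, 8, 5, 3, 0, 0] + [0] * 35
second_layer = pool_layer(first_layer)
centroid = target_centroid(second_layer, stripe_width=320)
```

The pooling stage also acts as the noise filter mentioned below: spurious first-layer spikes scattered along the stripe contribute little to the weighted average compared to the dense activity at the target edges.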
The accuracy of target detection can be observed in Figure 14B, which shows the results of a trial in which the target was moved with a sinusoidal motion while the robot eye was kept still. It can be noticed that the target motion is fully captured by the model, and this is reflected in the corresponding brain activity (Figure 14C). Figure 14D shows the behavior of the controller during a step response toward a static target: the eye is able to reach the target, albeit with some overshooting. Comparing the brain activity during this task (Figure 14E) with that of the target detection task (Figure 14C), it is noticeable that the retinal output is noisier during this trial. This is due to the intrinsic motion detection capabilities of the retina, as the activity of ganglion cells increases when motion is detected. Nevertheless, the second layer of neurons in the brain model (lower half) is still able to filter out activity of the first layer (upper half) not related to the target, so that its position can be computed with more accuracy. Similarly, during a task in which the robot had to follow a target moving linearly, the eye motion produces some noise in the retinal output (Figure 14G), but the controller is still able to extract the target position and successfully perform the task (Figure 14F).
6. FUTURE DEVELOPMENTS
The features detailed in the previous sections describe the first release of the Neurorobotics Platform. The development of the platform will continue, in order to provide even more simulation capabilities and features to the end user.
Short-term development plans include integration with the Brain Simulation Platform and the Neuromorphic Computing Platform, as described in Section 3, as well as an extension of the CLE that will be able to orchestrate distributed brain simulations, giving it the potential to simulate larger brain models in shorter times; this will lead to integration with the High Performance and Data Analytics Platform.
The state machines manager will be extended in order to respond also to events produced by the robot behavior, such as the robot entering a certain area of the environment or performing an action, allowing the user to design more complex experiments. The user will also be able to design the experiment workflow using graphical support included in the ESV GUI, with a timeline-based view that allows users to directly select objects and properties in the 3D environment and create events based on their state in the world simulation. Moreover, we plan to support fully automated repetitions of experiments, including success evaluation for each trial.
Finally, users will be able to upload environments built offline from custom physical models into the platform, greatly enhancing the environment building capabilities. At the same time, the Robot Designer will be extended to include support for external debuggers, static type checking, and code analysis. It is also planned to separate the core framework from the Robot Designer and release it as an independent project, to facilitate plug-in development in Blender in general.
7. CONCLUSION
This paper presented the first release of the HBP Neurorobotics Platform, developed within the EU Flagship Human Brain Project. The NRP provides scientists for the first time with an integrated toolchain for in silico experimentation in neurorobotics, that is, for simulating robots with neuro-controllers in complex environments. In particular, the NRP allows researchers to design simulated robot bodies, connect these bodies to brain models, embed the bodies in rich simulated environments, and calibrate the brain models to match the specific characteristics of the robots' sensors and actuators. The resulting setups make it possible to replicate classical animal and human experiments in silico and, ultimately, to perform experiments that would not be possible in a laboratory environment. The web-based user interface removes the need for any software installation, and the integration with the HBP Collaboratory portal gives access to the storage and computing resources of the HBP. Users can run experiments alone or in teams, which can foster collaborative research through the sharing of models and experiments.
In order to demonstrate the functionalities of the platform, we performed three experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor task based on a robotic controller, and a visual tracking task embedding a retina model implemented on the iCub humanoid robot. These use cases make it possible to assess the applicability of the NRP to robotic tasks as well as to neuroscientific experiments.
e nal goal of the NRP is to couple robots to detailed mod-
els of the brain, which will be developed in the HBP framework.
It will be possible for robotics and neuroscience researchers to
test state of the art brain models in their research. At the cur-
rent stage, the results achieved with the NRP demonstrate that
it is possible to connect simulations of simple spiking neural
networks with simulated robots. Future work will focus on the
integration of the mentioned neural models. In addition to this,
the integration of high-performance computing clusters and
neuromorphic hardware will also be pursued in order to improve
execution time of spiking neural networks replicating detailed
brain models. All informations relative to the NRP, including
how to access it and where to nd the code, are available on the
plaform website: http://neurorobotics.net.
AUTHOR CONTRIBUTIONS
All authors listed have made substantial, direct, and intellectual contributions to the work; they have also approved it for publication. In particular, EF, LV, AlAm, UA, SU, CL, AK, and M-OG contributed to the design of this work; EF, LV, AlAm, UA, SU,
JT, GH, JK, IP, OD, NC, and M-OG contributed to the writing
of the manuscript; ER and PM-C designed the retina model,
implemented it in the COREM framework, and collaborated in
integrating it into the NRP, together with LV, AlAm, and JK; GK,
FR, PS, RD, PL, CL, AK, and M-OG contributed to the conception and design of the NRP; and EF, LV, AlAm, UA, SU, JT, GH,
JK, IP, PM, MH, AR, DaPl, SD, SW, OD, NC, MK, AR, AxvoAr,
LG, and DaPe developed the NRP.
ACKNOWLEDGMENTS
The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (Human Brain Project) and from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1).
REFERENCES
Allard, J., Cotin, S., Faure, F., Bensoussan, P.-J., Poyer, F., Duriez, C., et al. (2007). "SOFA – an open source framework for medical simulation," in MMVR 15 – Medicine Meets Virtual Reality, Vol. 125 (Amsterdam, NL: IOP Press), 13–18.
Ambrosano, A., Vannucci, L., Albanese, U., Kirtay, M., Falotico, E., Martínez-Cañada, P., et al. (2016). "Retina color-opponency based pursuit implemented through spiking neural networks in the neurorobotics platform," in 5th International Conference, Living Machines 2016, Edinburgh, UK, July 19–22, Vol. 9793, 16–27.
Benefiel, A. C., and Greenough, W. T. (1998). Effects of experience and environment on the developing and mature brain: implications for laboratory animal housing. ILAR J. 39, 5–11. doi:10.1093/ilar.39.1.5
Bohren, J., and Cousins, S. (2010). The SMACH high-level executive [ROS news]. IEEE Robot. Autom. Mag. 17, 18–20. doi:10.1109/MRA.2010.938836
Braitenberg, V. (1986). Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT Press.
Briones, T. L., Klintsova, A. Y., and Greenough, W. T. (2004). Stability of synaptic plasticity in the adult rat visual cortex induced by complex environment exposure. Brain Res. 1018, 130–135. doi:10.1016/j.brainres.2004.06.001
Cofer, D., Cymbalyuk, G., Heitler, W. J., and Edwards, D. H. (2010a). Control of tumbling during the locust jump. J. Exp. Biol. 213, 3378–3387. doi:10.1242/jeb.046367
Cofer, D., Cymbalyuk, G., Reid, J., Zhu, Y., Heitler, W. J., and Edwards, D. H. (2010b). AnimatLab: a 3D graphics environment for neuromechanical simulations. J. Neurosci. Methods 187, 280–288. doi:10.1016/j.jneumeth.2010.01.005
Coumans, E. (2013). Bullet Physics Library. Available at: www.bulletphysics.org
Dacey, D. M., and Packer, O. S. (2003). Colour coding in the primate retina: diverse cell types and cone-specific circuitry. Curr. Opin. Neurobiol. 13, 421–427. doi:10.1016/S0959-4388(03)00103-X
Davison, A. P., Brüderle, D., Eppler, J. M., Kremkow, J., Muller, E., Pecevski, D. A., et al. (2008). PyNN: a common interface for neuronal network simulators. Front. Neuroinform. 2:11. doi:10.3389/neuro.11.011.2008
Denavit, J. (1955). A kinematic notation for lower-pair mechanisms based on matrices. Trans. ASME J. Appl. Mech. 22, 215–221.
Denoyelle, N., Pouget, F., Viéville, T., and Alexandre, F. (2014). "VirtualEnaction: a platform for systemic neuroscience simulation," in International Congress on Neurotechnology, Electronics and Informatics, Setúbal, PT.
Djurfeldt, M., Hjorth, J., Eppler, J. M., Dudani, N., Helias, M., Potjans, T. C., et al. (2010). Run-time interoperability between neuronal network simulators based on the MUSIC framework. Neuroinformatics 8, 43–60. doi:10.1007/s12021-010-9064-z
Drumwright, E. (2010). "Extending open dynamics engine for robotics simulation," in Simulation, Modeling, and Programming for Autonomous Robots, Volume 6472 of Lecture Notes in Computer Science (Berlin, DE: Springer), 38–50.
Gamez, D., Fidjeland, A. K., and Lazdins, E. (2012). iSpike: a spiking neural interface for the iCub robot. Bioinspir. Biomim. 7, 025008. doi:10.1088/1748-3182/7/2/025008
Gamez, D., Newcombe, R., Holland, O., and Knight, R. (2006). "Two simulation tools for biologically inspired virtual robotics," in Proceedings of the IEEE 5th Chapter Conference on Advances in Cybernetic Systems (Bristol: IOP Publishing Ltd), 85–90.
Gamma, E., Helm, R., Johnson, R., and Vlissides, J. (1995). Design Patterns: Elements of Reusable Object-Oriented Software. Boston, MA: Addison-Wesley Longman Publishing Co., Inc.
Gewaltig, M.-O., and Diesmann, M. (2007). NEST (neural simulation tool). Scholarpedia 2, 1430. doi:10.4249/scholarpedia.1430
Goodman, D., and Brette, R. (2008). Brian: a simulator for spiking neural networks in Python. Front. Neuroinform. 2:5. doi:10.3389/neuro.11.005.2008
Hinkel, G., Groenda, H., Krach, S., Vannucci, L., Denninger, O., Cauli, N., et al. (2017). A framework for coupled simulations of robots and spiking neuronal networks. J. Intell. Robot. Syst. 85, 71–91. doi:10.1007/s10846-016-0412-6
Hinkel, G., Groenda, H., Vannucci, L., Denninger, O., Cauli, N., and Ulbrich, S. (2015). "A domain-specific language (DSL) for integrating neuronal networks in robot control," in ACM International Conference Proceeding Series (New York, NY: ACM), 9–15.
IEEE. (1998). "IEEE recommended practice for software requirements specifications," in IEEE Std 830-1998 (Washington, DC: IEEE), 1–40.
Issa, F. A., Drummond, J., Cattaert, D., and Edwards, D. H. (2012). Neural circuit reconfiguration by social status. J. Neurosci. 32, 5638–5645. doi:10.1523/JNEUROSCI.5668-11.2012
Khan, M. M., Lester, D. R., Plana, L. A., Rast, A., Jin, X., Painkras, E., et al. (2008). "SpiNNaker: mapping neural networks onto a massively-parallel chip multiprocessor," in Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference (Washington, DC: IEEE), 2849–2856.
Koenig, N., and Howard, A. (2004). "Design and use paradigms for Gazebo, an open-source multi-robot simulator," in Intelligent Robots and Systems, 2004. (IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference, Vol. 3 (IEEE), 2149–2154.
Kunkel, S., Schmidt, M., Eppler, J. M., Plesser, H. E., Masumoto, G., Igarashi, J., et al. (2014). Spiking network simulation code for petascale computers. Front. Neuroinform. 8:78. doi:10.3389/fninf.2014.00078
León, B., Ulbrich, S., Diankov, R., Puche, G., Przybylski, M., Morales, A., et al. (2010). "OpenGRASP: a toolkit for robot grasping simulation," in Simulation, Modeling, and Programming for Autonomous Robots (Berlin, DE: Springer), 109–120.
Martínez-Cañada, P., Morillas, C., Nieves, J. L., Pino, B., and Pelayo, F. (2015). "First stage of a human visual system simulator: the retina," in Computational Color Imaging (Singapore, SG: Springer), 118–127.
Martínez-Cañada, P., Morillas, C., Pino, B., Ros, E., and Pelayo, F. (2016). A computational framework for realistic retina modeling. Int. J. Neural Syst. 26, 1650030. doi:10.1142/S0129065716500301
Metta, G., Fitzpatrick, P., and Natale, L. (2006). YARP: yet another robot platform. Int. J. Adv. Robot. Syst. 3, 043–048.
Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., et al. (2009). "ROS: an open-source robot operating system," in ICRA Workshop on Open Source Software (Washington, DC: IEEE), 5.
Roennau, A., Heppner, G., Nowicki, M., and Dillmann, R. (2014). "LAURON V: a versatile six-legged walking robot with advanced maneuverability," in Advanced Intelligent Mechatronics (AIM), 2014 IEEE/ASME International Conference (Washington, DC: IEEE), 82–87.
Ros, E., Ortigosa, E. M., Carrillo, R., and Arnold, M. (2006). Real-time computing platform for spiking neurons (RT-spike). IEEE Trans. Neural Netw. 17, 1050–1063. doi:10.1109/TNN.2006.875980
Schwaber, K., and Beedle, M. (2002). Agile Software Development with Scrum. London, GB: Pearson.
Terlemez, O., Ulbrich, S., Mandery, C., Do, M., Vahrenkamp, N., and Asfour, T. (2014). "Master motor map (MMM) – framework and toolkit for capturing, representing, and reproducing human motion on humanoid robots," in Humanoid Robots (Humanoids), 2014 14th IEEE-RAS International Conference (Washington, DC: IEEE), 894–901.
Ulbrich, S., Ruiz de Angulo, V., Asfour, T., Torras, C., and Dillmann, R. (2012). Kinematic Bezier maps. IEEE Trans. Syst. Man Cybern. B Cybern. 42, 1215–1230. doi:10.1109/TSMCB.2012.2188507
Vannucci, L., Ambrosano, A., Cauli, N., Albanese, U., Falotico, E., Ulbrich, S., et al. (2015). "A visual tracking model implemented on the iCub robot as a use case for a novel neurorobotic toolkit integrating brain and physics simulation," in IEEE-RAS International Conference on Humanoid Robots (Washington, DC: IEEE Computer Society), 1179–1184.
Voegtlin, T. (2011). CLONES: a closed-loop simulation framework for body, muscles and neurons. BMC Neurosci. 12:1–1. doi:10.1186/1471-2202-12-S1-P363
Weidel, P., Djurfeldt, M., Duarte, R. C., and Morrison, A. (2016). Closed loop interactions between spiking neural network and robotic simulators based on MUSIC and ROS. Front. Neuroinform. 10:31. doi:10.3389/fninf.2016.00031
Weidel, P., Duarte, R., Korvasová, K., Jitsev, J., and Morrison, A. (2015). ROS-MUSIC toolchain for spiking neural network simulations in a robotic environment. BMC Neurosci. 16:1. doi:10.1186/1471-2202-16-S1-P169
Conict of Interest Statement:e authors declare that the research was con-
ducted in the absence of any commercial or nancial relationships that could be
construed as a potential conict of interest.
Copyright © 2017 Falotico, Vannucci, Ambrosano, Albanese, Ulbrich, Vasquez Tieck, Hinkel, Kaiser, Peric, Denninger, Cauli, Kirtay, Roennau, Klinker, Von Arnim, Guyot, Peppicelli, Martínez-Cañada, Ros, Maier, Weber, Huber, Plecher, Röhrbein, Deser, Roitberg, van der Smagt, Dillman, Levi, Laschi, Knoll and Gewaltig. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
APPENDIX
A. Functional Requirements
As stated in Section 2.1, the itemized functional requirements should be interpreted as implemented platform features. All the listed requirements are to be satisfied by the NRP as a whole. For the sake of clarity, the requirements are grouped by topic.
A.1.Design and Editing
A.1.1.Robot
• Assemble a virtual robot
shall enable the user to load a ready-made robot from a
graphical library
shall enable the user to assemble a robot using ready-made
robot parts from a graphical library
shall provide standard sensors and actuators to the ready-
made or block-assembled robot
• Dene a kinematic chain
shall enable the user to dene a kinematic chain by building
a tree-like structure of the single links
shall enable the user to group kinematic chains
• Robot editing
shall enable the user to select parts of the robot
shall enable the user to edit a robot parts graphical attributes
or physical attribute
• Save/load designed robot
shall enable the user to save a model of a virtual robot
shall provide well-dened le formats (e.g., XML, URDF,
and SDF)
A.1.2.Environment
• Assemble and model virtual environment
shall enable the user to instantiate any number of objects
shall enable the user to remove objects from the
environment
shall enable the user to interactively change objects poses
and orientations
shall provide a GUI to view the object parameters
shall provide a GUI to edit the object parameters
shall enable the user to select (e.g., drag and drop) objects
from a local library of available objects
• Load/save virtual environment
shall provide a custom environment which can be loaded in
a new experiment
shall provide a custom environment which can be loaded in
an existing experiment
shall enable the user to save the environment status at any
moment during the simulation/editing
shall provide the user to store the environment in a well-de-
ned le format
A.1.3.Brain
• Create Brain Model
shall support binary format for data-driven brain model
representation
• Save/load and edit brain models
shall enable the user to save brain models
shall enable the user to reload brain models
shall provide a visual interface to edit brain models
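The save/reload requirement can be sketched as round-tripping a data-driven network description through a binary representation (pickle here; the platform's actual brain-model format differs, and the population names below are hypothetical):

```python
import pickle

# Illustrative data-driven brain model: named populations and a
# connection list with synaptic parameters.
brain_model = {
    "populations": {"sensors": 8, "actors": 2},
    "connections": [("sensors", "actors", {"weight": 0.5, "delay": 1.0})],
}

blob = pickle.dumps(brain_model)  # save in a binary format
reloaded = pickle.loads(blob)     # reload the identical model
print(reloaded["populations"]["actors"])  # 2
```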
A.1.4.Brain–Body Interface
• Transfer Modules
shall provide input or output variables that are produced
or consumed by the brain simulation (currents, spikes, and
spike rates)
shall provide input or output variables that are produced or
consumed by the sensor or the actuators of the robot
shall provide data from both simulators that can be con-
sumed by monitoring or debugging interfaces
shall produce suitable output for experiment data gathering,
saving it to a common le format
shall handle intensive computations such as the simulation
of a spinal cord or retina model
shall hold a state, dened as a set of variables that keeps
some values in between the loops
• Transfer modules editing
shall enable the user to save and reload transfer modules
shall enable the user to edit transfer modules through a
visual interface
shall enable the user to select populations of neurons
shall enable the user to label populations of neurons
shall enable the user to connect groups of neurons to a
transfer module and viceversa
shall enable the user to connect sensors and actuators to a
transfer module and viceversa
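A transfer module of the kind described above can be sketched as a small stateful object that maps spike rates from labeled neuron populations to motor commands, holding state between loop iterations. This is an illustrative sketch, not the platform's actual transfer-function API; the gains and population labels are assumptions.

```python
# Stateful transfer module sketch: consumes spike rates (Hz) from two
# labeled populations, produces wheel-speed commands, and keeps a
# low-pass-filtered state that persists between control loops.
class RateToWheelSpeeds:
    def __init__(self, gain=0.02, smoothing=0.5):
        self.gain = gain
        self.smoothing = smoothing
        self.state = {"left": 0.0, "right": 0.0}  # held between loops

    def __call__(self, rates):
        """rates: dict mapping population label -> spike rate in Hz."""
        for side in ("left", "right"):
            target = self.gain * rates[side]
            # exponential smoothing stabilizes the command across loops
            self.state[side] += self.smoothing * (target - self.state[side])
        return dict(self.state)

tm = RateToWheelSpeeds()
cmd = tm({"left": 50.0, "right": 10.0})
print(cmd)  # smoothed wheel-speed command
```

The same structure works in the opposite direction, e.g., translating a camera image into injected currents or spike trains.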
A.1.5.Experiment
• Conguring and loading/saving an experiment setup
shall enable the user to select the virtual environment
shall enable the user to select the Neurobot to use in the
experiment
shall provide user to load/save a denition of an experiment
setup
• Dening action sequences
shall enable the user to dene events (e.g., change of light
intensity and moving of an object) occurring at a certain
point in time
shall enable the user to dene the properties of an event
(e.g., duration)
shall enable the user to specify complex events (by combin-
ing single events)
shall enable developers to dene more complex actions a
scripting-interface
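The action-sequence requirements can be sketched as timed events with properties, plus a combinator that groups single events into a complex one. Event names and fields here are hypothetical, not the platform's scripting interface.

```python
# Illustrative event timeline: events occur at a point in simulated time,
# carry properties such as a duration, and can be combined.
def event(name, start, duration=0.0, **props):
    return {"name": name, "start": start, "duration": duration, **props}

def combine(name, *events):
    """A complex event spanning its combined single events."""
    start = min(e["start"] for e in events)
    end = max(e["start"] + e["duration"] for e in events)
    return {"name": name, "start": start, "duration": end - start,
            "children": list(events)}

dim = event("change_light", start=2.0, duration=1.0, intensity=0.3)
move = event("move_object", start=2.5, duration=2.0, target="screen")
distract = combine("distraction", dim, move)

print(distract["start"], distract["duration"])  # 2.0 2.5
```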
A.2.Simulation
A.2.1.Simulation Consistency and Synchronization
Mechanisms
• Simulation control
shall support start, stop and pause of an experiment at any
time
  - the simulation framework shall be capable of injecting actions into the simulation of the virtual world as defined by the experimenter
A.2.2.Interactive Visualization
• Simulation control/editing
shall enable the user to control the simulation through the
GUI
shall enable the user to live edit the experiment congura-
tion using the GUI
• Simulation monitoring
shall display a 3D rendering of the world scene
shall enable the user to navigate the 3D scene
shall display measurements from the two simulations
shall enable multiple users to view the same running simu-
lation at the same time
shall reset every component of the simulation at any time
shall be capable of exposing its internal simulation execu-
tion speed
shall be presented to the user in a visual interface
shall provide an interface to modify any variable parameters
to control the details of the simulation
shall be completely reproducible
• Physics simulation
  - shall maintain a consistent model of the world and the robot during the execution of an experiment
  - shall maintain a world clock to update the world state in suitable time-slices
• Synchronization
  - the loop between the brain simulator and the WSE shall operate at a rate on the order of 0.1 ms of simulated time
  - the world clock shall be synchronized to the brain simulation clock to achieve a consistent, overarching notion of time
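The synchronization requirements above can be sketched as a lock-step loop: both simulators advance by the same slice of simulated time, exchanging data in between, so their clocks never diverge. The timestep value and simulator stubs are illustrative, not the platform's closed-loop engine.

```python
# Lock-step coupling sketch: the brain simulator and the world
# simulation engine (WSE) share one overarching notion of time.
BRAIN_DT = 0.1e-3  # 0.1 ms of simulated time per exchange, per the requirement

class StubSimulator:
    def __init__(self):
        self.clock = 0.0
    def step(self, dt):
        self.clock += dt  # a real simulator would integrate its model here

brain, wse = StubSimulator(), StubSimulator()

def run(t_stop):
    while brain.clock < t_stop:
        brain.step(BRAIN_DT)  # advance the neural dynamics
        wse.step(BRAIN_DT)    # advance the world physics by the same slice
        # transfer modules would exchange spikes and sensor data here
    return brain.clock, wse.clock

print(run(0.02))  # both clocks advance in lock-step
```

Because each side blocks until the other has completed the same time-slice, pausing or resetting one simulator cannot desynchronize the experiment.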