Augmented reality for robots:
virtual sensing technology
applied to a swarm of e-pucks
Andreagiovanni Reina, Mattia Salvaro, Gianpiero Francesca,
Lorenzo Garattoni, Carlo Pinciroli, Marco Dorigo, Mauro Birattari
IRIDIA, Université Libre de Bruxelles, Brussels, Belgium
areina@ulb.ac.be, mbiro@ulb.ac.be
MS is also with Università di Bologna, Italy.
Abstract—We present a novel technology that allows real robots to perceive an augmented reality environment through virtual sensors. Virtual sensors are a useful and desirable technology for research activities because they allow researchers to quickly and efficiently perform experiments that would otherwise be more expensive, or even impossible. In particular, augmented reality is useful (i) for prototyping and assessing the impact of new sensors before they are physically produced; and (ii) for developing and studying the behaviour of robots that should deal with phenomena that cannot be easily reproduced in a laboratory environment because, for example, they are dangerous (e.g., fire, radiation). We realised an augmented reality system for robots in which a simulator retrieves real-time data on the real environment through a multi-camera tracking system and delivers post-processed information to the robot swarm according to each robot's sensing range. We illustrate the proposed virtual sensing technology through an experiment involving 15 e-pucks.
I. INTRODUCTION
Swarm robotics [1] is a promising discipline that studies
the coordination of a large number of robots to perform
tasks in several domains. Real-world applications that would
benefit from swarm robotics-based solutions include search
and rescue, demining, nanoparticle medical treatments, space
or underwater construction. However, the complexity and unpredictability of these applications exceed the capabilities of current swarm systems, which are developed under totally controlled laboratory conditions. While, on the one hand, robot swarms promise characteristics such as scalability, robustness, adaptivity, and low cost, on the other hand, robot swarms
are complex to analyse, model and design because of the
large number of nonlinear interactions among the robots.
Mathematical and statistical tools to describe robot swarms
are still under development, and a theoretical methodology
to forecast the swarm dynamics given the individual robot
behaviour is missing [2]. As a consequence, it is common
practice to resort to empirical studies to assess the performance
of robot swarms.
Experiments may be either in simulation or with physical
hardware. The former are easier to run and less time consuming than the latter. However, when experiments are performed
only in simulation, they may not guarantee that the estimated
performance matches the one measured with real hardware.
In contrast, experiments with robots demonstrate and confirm
that the investigated system functions on real devices, which
include challenging aspects intrinsic to reality and beyond the
designer’s control, such as noise and device failures. However,
experimentation with physical hardware is expensive, both in
terms of money and time. In addition, hardware modifications
are impractical and often impossible to realise when time
and money resources are limited. We believe that a viable
solution to these issues is performing hybrid experiments that
combine real robots with simulation. This work proposes a
novel technology to endow a robot swarm with virtual sensors,
immersing the robots in an augmented reality environment.
We envision two useful applications of augmented reality
for robots: (i) prototyping new sensors to endow the robots
with additional sensing capabilities, and (ii) adding virtual elements to the experimental environment. Concerning point (i),
through the proposed virtual sensing technology, a robot swarm
can be quickly endowed with a new sensor and the new
resulting swarm behaviour can be tested before the sensor is
produced. Compared to the production and installation of a
real sensor, implementing a virtual sensor is much quicker,
cheaper and easier, since it is built entirely in software and
installed by uploading the code onto the robot. Therefore, several
tests can be done before the actual production of the sensor.
This prototyping approach is particularly advantageous when
operating with a swarm of robots. In fact, hardware production
and installation for a large number of robots may be expensive
and require a considerable amount of work. Concerning
point (ii), a researcher can design a virtual environment with
the desired characteristics and allow a robot swarm to perceive
that environment. This use case permits studies in experimental conditions that are hard to reproduce because of the costs or risks involved. For instance, the proposed technology allows the simulation of harmful radiation in a nuclear accident site (or the diffusion of flames in a fire) and enables the robots to perceive the simulated radiation (or temperature) through a
virtual sensor. In particular, the proposed technology allows a
researcher to simulate environments that evolve in time with
arbitrarily complex temporal patterns. This allows the study
of systems that are required to adapt to changes in their
operational environment.
To enable virtual sensing, we designed an architecture
composed of three components: a tracking system, a simulator
and a robot swarm. The tracking system provides the simulator
with the location and orientation of each robot in real time.
On the basis of this information, the simulator computes the
readings of the virtual sensors. The simulator then delivers
the readings to the robots via wireless communication. To
illustrate the system, we performed an experiment in which
a swarm of 15 e-pucks senses and acts in a real environment,
augmented with features of a virtual counterpart.
The rest of the paper is organised as follows. In Section II,
we give an overview of the state of the art in virtual sensing
technology. In Section III, we describe the architecture of the
system. In Section IV, we describe the experiment that we
perform to illustrate the system. In Section V, we discuss the
proposed technology and we suggest possible future work.
II. RELATED WORK
A number of works devoted attention to the subject of
virtual sensing technology. O’Dowd et al. [3] and Bjerknes
et al. [4] implemented a specific virtual sensor to perform
robot localisation. In both cases, the authors adopt a tracking
or positioning system to import the robots' positions into a simulator. O'Dowd et al. [3] used a tracking system and WiFi communication to supply the robots with their position; in their work, virtual sensing thus amounts to a specific virtual GPS sensor. Bjerknes et al. [4] developed a
2D and 3D positioning system based on ultrasonic beacons.
Through triangulation, the robots can calculate their position
autonomously. Bjerknes et al. [4] achieved decentralised and
scalable virtual sensing employing an embedded simulator
running on each robot. This solution would not be viable for robot swarms, due to the limitations of the hardware in terms of memory and computational power. In their system, the virtual sensors are not transparent to the control software, which has to allocate power and time resources to the virtual sensor computation. Furthermore, the ultrasonic positioning system requires hardware modifications: in particular, ultrasonic sensors must be installed on each robot. In contrast to these two
systems, we propose a general architecture that enables the
implementation of any kind of virtual sensor, requiring only
WiFi communication hardware on the robots involved. While,
in the literature, the virtual sensors are ad hoc implementations
to be used only in a specific experimental setup, here we
propose a general-purpose technology for virtual sensing.
Although this paper does not deal with virtual actuation,
we report here the main works on the topic as it is relevant
in the context of augmented reality. In the literature, there
are a few works that propose an interaction between robots
and virtual environment based on virtual actuators. Among
those who implemented virtual actuation, Sugawara et al. [5] and Garnier et al. [6] developed systems in which robots deposit virtual pheromone. Once deposited, the pheromone is
visualised using coloured light projections on the floor. Both works employed a tracking system, a projector, and additional light sensors installed on top of the robots. Thus, the
proposed approaches require both custom hardware on the
robots and a smart environment based on a light projector. The
approaches require controlled light conditions: in particular,
ambient light needs to be reduced to a minimum to allow the robots to perceive the light emitted by the projector. Khaliq et
al. [7] implemented virtual pheromone using a grid of radio frequency identification (RFID) tags embedded in the floor, each tag covering a hexagonal portion of the grid. The robots
TABLE I: List of components and respective acronym.
Acronym Component
ATS Arena tracking system
ATS-PE Arena tracking system physics engine
ATS-S Arena tracking system server
ATS-C Arena tracking system client
VS-S Virtual sensor server
VS-C Virtual sensor client
VS-SM Virtual sensor simulation module
VS-RRM Virtual sensor real robot module
are equipped with an RFID transceiver and are able to write
and read the value of the tag they are standing on. Khaliq
et al. [7] achieved a scalable system because the information
is externalised and spatialised throughout the RFID tag grid: the robots only process local information and there is no need for central control and computation. However, a smart
environment and specific hardware on the robots must be
provided for the realisation of only one specific virtual actuator.
III. ARCHITECTURE
In this section, we detail the components of the system
and their communication protocols, following the logical flow
from the source of the information (tracking system) to the
final “user” of the virtual sensors (the robots). Figure 1 shows
the information flow between the components, while Table I
lists the components of the virtual sensing architecture and
provides their respective acronym.
Similarly to the works mentioned in Section II, our system
is based on a tracking system that computes the robots’
positions, i.e., their locations and orientations, in real time.
The tracking system streams the robots’ data to a simulator
which, in turn, computes the readings of the virtual sensors.
Next, the simulator sends the computed readings to the robots,
according to their sensing ranges. Finally, the robots access the
readings to choose the next actions to perform.
A. Arena tracking system
The developed virtual sensing architecture can work with
any tracking system that supplies the real-time location and orientation of the robots. In this work, we employ the arena tracking
system (ATS) [8], a multi-camera tracking system developed at the IRIDIA lab. The ATS is composed of a 4×4 matrix of HD cameras and a dedicated 16-core server which runs custom software based on the Halcon libraries for machine vision (http://www.halcon.com). The camera matrix covers an area of about 10 × 7 m².
Between each pair of neighbouring cameras there is an overlapping region that is covered by both cameras. This overlap allows the ATS to robustly handle the tracking of robots that move across two cameras' fields of view.
The tracking software transparently manages image merging
and robot tracking: it receives as input an XML configuration
file and returns as output the robots’ position list. The input
configuration file allows a user to select the active cameras and
1http://www.halcon.com
Fig. 1: Graphical representation of the proposed virtual sensing technology.
to tune each camera parameters to perform tracking at different
speeds and under different light conditions.
The arena tracking system detects the robots through paper-printed markers placed on top of the robots. The markers are composed of a matrix of 6 cells (2×3), similar to a very
simple QR code. This setup yields 64 different configurations;
all of them are valid configurations that the ATS recognises.
This marker method makes it simple and quick to set up an experiment: the markers can be printed on standard paper by any black-and-white printer, and there is no need for mutually exclusive configurations or for algorithms to compute the set of valid configurations, since any subset of the 64 IDs is valid.
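Since each of the six cells encodes one bit, any cell pattern maps to a unique ID in [0, 63]. A minimal sketch of this mapping (cell sampling and orientation handling omitted; the function name is ours, not the ATS's):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>

// Decode a 2x3 marker, read row by row as six binary cells (true = black),
// into an ID in [0, 63]. With 6 cells there are 2^6 = 64 configurations,
// all of which the ATS accepts as valid.
std::uint8_t DecodeMarker(const std::array<bool, 6>& cells) {
   std::uint8_t id = 0;
   for (std::size_t i = 0; i < cells.size(); ++i) {
      id |= static_cast<std::uint8_t>(cells[i]) << i; // cell i sets bit i
   }
   return id;
}

int main() {
   const std::array<bool, 6> cells = {true, false, true, false, false, true};
   std::cout << "marker id = " << static_cast<int>(DecodeMarker(cells)) << "\n"; // 37
}
```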
A large coverage area, ease of configuration, and a large maximum swarm size are the main features that characterise the ATS with respect to other tracking systems [8]. Nevertheless, any tracking system can be used with our virtual sensing system, as long as it returns the location and orientation of the robots in real time.
The set of positions of all robots, i.e., their locations and orientations, constitutes the arena state, which is transmitted to the swarm robotics simulator to compute the virtual sensor
readings (see Section III-B). The communication between the
ATS and the simulator is achieved through a client/server
architecture on TCP/IP. The ATS provides a server named
arena tracking system server (ATS-S) that waits for connections
from the simulator. The connection must happen before the
robot experiment begins. Once connected, the ATS-S sends to
the simulator the last computed arena state at its maximum
tracking rate. Since image processing time is variable, the
tracking rate is not constant and the server transmits the new
arena state as soon as it is computed. Until a new arena state is
received, the simulator uses the last known arena state. During
our experiment, the average transmission period of the arena state is about 90 ms, which is an acceptable rate for our purposes.
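The paper does not specify the wire format of the arena state; purely as an illustration, assuming one fixed-size record per tracked robot, the data exchanged between the ATS-S and the simulator could be organised as follows (hypothetical layout, not the actual ATS-S protocol):

```cpp
#include <cstdint>
#include <vector>

// One tracked robot: marker ID plus pose in the arena frame.
struct RobotPose {
   std::uint32_t marker_id; // ID decoded from the 2x3 paper marker
   float x_m;               // location, metres
   float y_m;
   float theta_rad;         // orientation, radians
};

// The arena state: the list of robot poses computed from one merged camera
// frame. The timestamp lets the receiver detect staleness; since the
// tracking rate is variable (about 90 ms per state in our experiment),
// the simulator keeps using the last received ArenaState until a fresher
// one arrives.
struct ArenaState {
   std::uint64_t timestamp_ms;
   std::vector<RobotPose> poses;
};
```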
B. Robot swarm simulator: ARGoS
The robot simulator we use is ARGoS [9], a modular
physics-based simulator tailored for swarm robotics. One of
the main features of ARGoS is its modularity, which allows
a user to extend the simulator by developing custom plugins.
We exploited ARGoS's modularity by implementing a custom
physics engine named arena tracking system physics engine
(ATS-PE) that controls the motion of the simulated robots. A physics engine is the simulator module dedicated to computing and updating, at each timestep, the world state, including the robots' positions.
The ATS-PE is the focal point of the whole virtual sensor
architecture. Through the ATS-PE, the control of the experiment is transferred from reality to simulation. The ATS-PE connects to the ATS-S to receive the arena state from the tracking system, and uses this information to update the positions of the simulated robots. Then, ARGoS computes the virtual
sensor readings for each simulated robot. When the virtual
sensor readings have been computed, the ATS-PE transmits
them to the (real) robots, and finally the robot control software
reads these values as if they were from normal sensors.
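In outline, each timestep of the ATS-PE replaces physics integration with a pose copy from the tracker. The following sketch reuses the RobotPose/ArenaState structures sketched in Section III-A; the class and method names are illustrative, not ARGoS's actual plugin API:

```cpp
#include <cstdint>
#include <unordered_map>

// Minimal stand-in for a simulated robot (illustrative only).
struct SimulatedRobot {
   float x = 0.0f, y = 0.0f, theta = 0.0f;
   void SetPose(float nx, float ny, float ntheta) { x = nx; y = ny; theta = ntheta; }
};

// Sketch of the per-timestep logic of the ATS physics engine (ATS-PE):
// instead of integrating robot dynamics, it mirrors the tracked poses of
// the real robots into the simulation. ARGoS then computes the virtual
// sensor readings as for any simulated robot, and the VS-S ships each
// real robot its own readings.
class ATSPhysicsEngine {
public:
   void Update(const ArenaState& state) {
      for (const RobotPose& pose : state.poses) {
         auto it = m_robots.find(pose.marker_id);
         if (it != m_robots.end()) {
            it->second.SetPose(pose.x_m, pose.y_m, pose.theta_rad);
         }
      }
   }
private:
   std::unordered_map<std::uint32_t, SimulatedRobot> m_robots; // keyed by marker ID
};
```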
While the ATS runs on a dedicated high-performance
machine, the simulator can run on a general-purpose computer without affecting the overall system performance. To
communicate with the tracking system and the robots, the ATS-PE includes a client called the ATS client (ATS-C) and a server called the virtual sensor server (VS-S). The ATS client is a thread responsible for establishing a connection to the ATS through Ethernet LAN; it periodically receives the arena state. The arena
state is encapsulated in a data structure organised according
to a double-buffer design. The double buffer guarantees the
consistency of the arena state and minimises the duration of
the lock session on the data structure. One buffer always holds the last complete arena state for the simulator to read, while the ATS client writes the incoming arena state into the other; once the update is complete, the ATS client swaps the buffers.
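A minimal sketch of the double-buffer pattern described above (our own illustration, not the actual ATS-C code): the writer fills the back buffer outside the lock and publishes it with an O(1) swap, so the lock is never held for the duration of a network receive.

```cpp
#include <mutex>
#include <utility>

// Double-buffered holder for the arena state (single writer, any number
// of readers). The ATS client thread fills the back buffer outside the
// lock; the simulation loop only ever copies the front buffer.
template <typename T>
class DoubleBuffer {
public:
   void Write(T value) {                 // writer thread (ATS-C)
      m_back = std::move(value);         // fill back buffer, no lock held
      std::lock_guard<std::mutex> lock(m_mutex);
      std::swap(m_front, m_back);        // publish: O(1) under the lock
   }
   T Read() const {                      // reader thread (simulation loop)
      std::lock_guard<std::mutex> lock(m_mutex);
      return m_front;                    // last complete, consistent state
   }
private:
   mutable std::mutex m_mutex;
   T m_front{};                          // complete state, read side
   T m_back{};                           // being overwritten by the writer
};
```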
The VS-S executes a thread that constantly waits for robot
connections. Upon establishing a connection, the VS-S sends
the robot the necessary information to initialise the virtual
sensor. Robots can connect dynamically to the virtual sensor server at any stage of the experiment through a thread running on board, called the virtual sensor client (VS-C). At the end of
each simulation timestep, the VS-S transmits the virtual sensor
readings to the corresponding robots. The duty of serialising and deserialising the data structure is delegated to the implementation of each individual virtual sensor (see Section III-C).
C. Virtual sensors
A virtual sensor is composed of two software modules: one module running in the ARGoS simulator that computes the sensor's readings (called the virtual sensor simulation module, VS-SM), and a second module running on the robot that allows the control software to receive the virtual sensor readings (the virtual sensor real robot module, VS-RRM).
The simulation module is similar to a simulated sensor: it uses the robot position and the simulated environment information to compute the sensor reading at each timestep. In addition, unlike a classic simulated sensor, the VS-SM extends a generic virtual sensor interface which provides the functionality of serialising the reading values and copying them to the output buffer of the VS-S (see Section III-B).
The VS-RRM appears to the robot control software as a normal sensor; however, its readings do not come from a dedicated hardware component but are instead retrieved from the input buffer of the VS-C. Similarly to the VS-SM, the VS-RRM implements the virtual sensor interface, which provides the specular functionality of deserialising the readings received from the VS-C's input buffer.
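The two modules thus share a single contract: the VS-SM must serialise exactly what the VS-RRM deserialises. A hedged sketch of what such an interface could look like (the names are ours; the actual interface in the system may differ):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using ByteBuffer = std::vector<std::uint8_t>;

// Contract shared by the two halves of a virtual sensor: the simulation
// module (VS-SM) serialises the readings it has computed into the VS-S
// output buffer; the real-robot module (VS-RRM) deserialises them from
// the VS-C input buffer and exposes them to the control software like
// any other sensor.
class VirtualSensorInterface {
public:
   virtual ~VirtualSensorInterface() = default;
   // VS-SM side: append this sensor's current readings to the buffer.
   virtual void Serialize(ByteBuffer& out) const = 0;
   // VS-RRM side: read the readings back; returns the bytes consumed.
   virtual std::size_t Deserialize(const ByteBuffer& in, std::size_t offset) = 0;
};
```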
We tailored the virtual sensor interface and the virtual
sensor client for the e-puck robot [10], a small-size wheeled
robot. Decoupling the virtual sensor server from the virtual sensor's data structure frees the virtual sensor designer from any constraint regarding the data structure, allowing an arbitrary level of complexity. In addition, through the proposed architecture, the robot control software accesses virtual sensor readings in a transparent way: at the robot control software level, there is no distinction between real and simulated sensors.
As stated above, one of the main advantages of the proposed technology is the simplicity of implementing a new virtual sensor. Through the proposed technology, providing robots with novel sensing capabilities requires a very small amount of work because, once the infrastructure is in place, most of the components do not need any further modification. In fact,
to add a new virtual sensor, it is sufficient to implement the
VS-SM and VS-RRM modules and register the new sensor’s
name in the VS-C and VS-S. Most of this implementation is
straightforward and automated; the crucial part is the design
and implementation of (i) the data structure that embeds the
virtual sensor readings (e.g., a single real number vs. a list
of n bits), and (ii) the logic that ARGoS will use to compute
these readings given the robot position and the environment
state.
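As an example, the binary pollutant sensor used in Section IV reduces to a one-byte payload. A sketch of its robot-side module under the interface sketched above (illustrative, not the paper's code); the simulation-side module (VS-SM) would fill the same byte using the cone test shown in Section IV:

```cpp
// Robot-side module (VS-RRM) of the binary pollutant sensor: a one-byte
// payload exposed to the control software as a plain boolean reading.
class PollutantSensorRRM : public VirtualSensorInterface {
public:
   bool Perceived() const { return m_reading; }      // queried by the controller
   void Serialize(ByteBuffer& out) const override {  // symmetric to the VS-SM side
      out.push_back(m_reading ? 1 : 0);
   }
   std::size_t Deserialize(const ByteBuffer& in, std::size_t offset) override {
      m_reading = (in.at(offset) != 0);              // filled from the VS-C buffer
      return 1;                                      // one byte consumed
   }
private:
   bool m_reading = false;
};
```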
IV. ROBOT EXPERIMENT
We illustrate the architecture presented above through an
experiment in which a swarm of 15 e-pucks equipped with
virtual sensors is able to perceive an augmented reality environment. In this experiment, the robots are equipped with a
virtual pollutant sensor. The pollutant is simulated via ARGoS.
The sensor returns a binary value indicating whether pollutant is present at the robot's location. In our experiment,
we assume that the pollutant is present in an area within a
diffusion cone with vertex located at the pollutant source σ.
The diffusion cone is characterised by the direction angle θd, which indicates the direction of diffusion, and by the cone width θw. The parameters θd and θw are defined by the user at the beginning of the experiment. In contrast, the location of the pollutant source σ is defined through an ATS marker placed in the real environment; therefore, it is computed at runtime and may change during the experiment. In the implemented pollutant
TABLE II: Parameter values used in the robot experiment.
θd10θr100
θw45Tr60 s
Ps0.3
diffusion model, we included a time-variant component that turns the direction angle θd by θr degrees counterclockwise Tr seconds after the beginning of the experiment. Table II shows the parameter values we have used for our experiment.

TABLE II: Parameter values used in the robot experiment.
θd = 10°    θr = 100°
θw = 45°    Tr = 60 s
Ps = 0.3
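Concretely, the reading logic the VS-SM has to implement reduces to an angular test: a robot perceives the pollutant when the bearing from the source σ to the robot deviates from the (possibly rotated) diffusion direction θd by at most θw/2. A sketch, assuming angles in degrees as in Table II:

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Wrap an angle in degrees to (-180, 180].
double WrapDeg(double a) {
   a = std::fmod(a, 360.0);
   if (a > 180.0)   a -= 360.0;
   if (a <= -180.0) a += 360.0;
   return a;
}

// True if the robot at (rx, ry) lies inside the diffusion cone with vertex
// at the source (sx, sy). After t_r seconds from the start, the direction
// theta_d is turned counterclockwise by theta_r degrees, reproducing the
// experiment's time-variant diffusion model.
bool InPollutantCone(double rx, double ry, double sx, double sy,
                     double theta_d, double theta_w, double theta_r,
                     double t, double t_r) {
   const double dir = theta_d + (t >= t_r ? theta_r : 0.0);           // one-shot rotation
   const double bearing = std::atan2(ry - sy, rx - sx) * 180.0 / kPi; // source -> robot
   return std::fabs(WrapDeg(bearing - dir)) <= theta_w / 2.0;
}
```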
The same robot control software runs on all the robots
of the swarm. The software encodes a very simple robot
behaviour that has been designed with the purpose of offering
an easy-to-understand experiment that illustrates the proposed
technology. The robots move randomly within a hexagonal arena and, when a robot perceives the pollutant (i.e., it lies within the diffusion cone defined above), it stops and lights up its red LEDs with probability Ps = 0.3 per timestep. Figure 2
shows two screenshots of the experiment and the full video
can be found at https://youtu.be/7QAWi5JDwzA.
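The individual behaviour amounts to a few lines of control-step logic. A sketch (the actuator helpers are hypothetical stand-ins for the e-puck control API, and PollutantSensorRRM is the robot-side module sketched in Section III-C):

```cpp
#include <random>

// Hypothetical actuator helpers provided by the robot control framework.
void StopWheels();
void SetRedLeds(bool on);
void RandomWalkStep();

// One control step: random walk until the virtual pollutant sensor fires,
// then stop and light the red LEDs with probability Ps = 0.3 per timestep.
void ControlStep(PollutantSensorRRM& sensor, std::mt19937& rng) {
   std::bernoulli_distribution signal(0.3); // Ps from Table II
   if (sensor.Perceived() && signal(rng)) {
      StopWheels();
      SetRedLeds(true);
   } else {
      RandomWalkStep(); // obstacle-avoiding random walk (not shown)
   }
}
```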
In this setup, the robots receive noiseless readings from
their virtual sensors; however, if desired, a virtual sensor
can be designed with a more realistic characterisation to
include noise. Despite its simplicity, we deem this pollutant
diffusion experiment sufficient to illustrate the elements of
the architecture and showcase one of the main advantages of
virtual sensing: the possibility of a cheap, quick and safe implementation that enables experimentation in scenarios involving dangerous components. The polluted environment
we present is a simple example of the type of scenarios that are
normally very complex to set up within a laboratory; however,
through virtual sensing, experiments in such a scenario can
be performed with very limited effort. Additionally, in the
presented experiment, the pollutant diffusion model includes
a time-variant component that illustrates the possibility of
performing experiments in environments that change over time according to any user-defined model.
Fig. 2: Screenshots of the running experiment: on the left, the simulated environment in ARGoS and, on the right, an aerial view of the robots, with an overlay representing the pollutant cone. The simulated robot positions match the real ones. The object placed in the center of the arena represents the pollutant source σ and, in the simulated environment, the green cone highlights the area where robots perceive the pollutant. In the real environment, robots within the pollutant cone perceive the pollutant through their virtual sensors and light up their red LEDs with probability Ps.
V. CONCLUSIONS AND FUTURE WORK
In this paper, we presented a novel technology to enable
augmented reality for robots via virtual sensing. We proposed
an architecture based on a tracking system, a simulator and
virtual sensor modules running on the robots. The proposed
architecture allows a robot swarm to perceive in real time an
augmented reality environment that may include components
difficult or impossible to create within a lab. These extended
capabilities pave the way for novel types of experimentation
with robot swarms. For example, an envisioned case study
includes the simulation of radioactive emissions in a nuclear
disaster site. The use of time-variant features is particularly useful to perform experiments that investigate the swarm's flexibility with respect to external changes, e.g., changes in environmental features. Additionally, the proposed technology allows one
to prototype novel sensors prior to their production. This
procedure may reduce the risk of producing inadequate hardware by allowing the preliminary verification that the algorithms of interest function as intended.
A limitation of the proposed technology is that it does not
operate in environments where objects taller than the robots
occlude the field of view of the ATS's ceiling cameras.
For instance, experiments involving walking humans may
hinder the proper functioning of the system because in certain
situations the human body may interfere with the tracking of
the robots.
Future developments include furthering the design of our system, endowing it with virtual actuators capable of modifying the augmented reality. A viable approach currently under
study to achieve this result is to close the communication
loop between the robots and the simulator. In other words, the
virtual action of a robot consists in the delivery of a message
from the robot to the simulator. The latter, in turn, processes
the message and applies the modification to the simulated
environment. In this way, the modification becomes available
for virtual sensing by the rest of the swarm.
The realisation of virtual actuation would bring the same
advantages given by the virtual sensing technology, applied
to actuators instead of sensors. We could prototype actuators
as we can do for sensors, and additionally we could perform
experiments in fully dynamic virtual environments that change
in response to the robot actions. An appealing idea is to
implement virtual actuators that, while impossible to install
on the e-puck (or similar low-cost robots), would enable
complex and affordable experimental studies on swarm dynamics. For instance, from the combination of virtual sensing
and actuation, experiments with virtual pheromones would be easily accessible and would facilitate studies on the role of stigmergy in self-organisation. Virtual sensing and actuation
technology offers a clean, flexible and software-based solution to investigate, by means of real-robot experiments, fields of swarm robotics that have so far been the prerogative of simulation.
ACKNOWLEDGMENT
This work was partially supported by the European Re-
search Council through the ERC Advanced Grant “E-SWARM:
Engineering Swarm Intelligence Systems” (contract 246939).
Marco Dorigo and Mauro Birattari acknowledge support from
the Belgian F.R.S.-FNRS. We thank Bernard Mayeur for his help in refactoring the code of the virtual sensor modules.
REFERENCES
[1] M. Dorigo, M. Birattari, and M. Brambilla, “Swarm
robotics,” Scholarpedia, vol. 9, no. 1, p. 1463, 2014.
[2] M. Brambilla, E. Ferrante, M. Birattari, and M. Dorigo,
“Swarm robotics: A review from the swarm engineering
perspective,” Swarm Intelligence, vol. 7, no. 1, pp. 1–41,
2013.
[3] P. J. O’Dowd, A. F. T. Winfield, and M. Studley, “The
distributed co-evolution of an embodied simulator and
controller for swarm robot behaviours,” in IROS. IEEE,
2011, pp. 4995–5000.
[4] J. D. Bjerknes, W. Liu, A. F. Winfield, C. Melhuish,
and C. Lane, “Low cost ultrasonic positioning system
for mobile robots,” in Proceedings of Towards Autonomous
Robotic Systems (TAROS 2007), pp. 107–114, 2007.
[5] K. Sugawara, T. Kazama, and T. Watanabe, “Foraging
Behavior of Interacting Robots with Virtual Pheromone,”
in Proceedings of IEEE/RSJ International Conference on
Intelligent Robots and Systems. Los Alamitos, CA: IEEE
Press, 2004, pp. 3074–3079.
[6] S. Garnier, F. Tache, M. Combe, A. Grimal, and G. Theraulaz, “Alice in pheromone land: An experimental setup for the study of ant-like robots,” in Swarm Intelligence Symposium (SIS 2007). IEEE, 2007, pp. 37–44.
[7] A. A. Khaliq, M. D. Rocco, and A. Saffiotti, “Stigmergic
algorithms for multiple minimalistic robots on an RFID
floor,” Swarm Intelligence, vol. 8, no. 3, pp. 199–225,
2014.
[8] A. Stranieri, A. Turgut, M. Salvaro, L. Garattoni,
G. Francesca, A. Reina, M. Dorigo, and M. Birattari, “IRIDIA’s arena tracking system,” IRIDIA, Université Libre de Bruxelles, Brussels, Belgium, Tech. Rep. TR/IRIDIA/2013-013r004, 2015.
[9] C. Pinciroli, V. Trianni, R. O’Grady, G. Pini, A. Brutschy,
M. Brambilla, N. Mathews, E. Ferrante, G. A. Di Caro,
F. Ducatelle, M. Birattari, L. M. Gambardella, and
M. Dorigo, “ARGoS: a modular, parallel, multi-engine
simulator for multi-robot systems,” Swarm Intelligence,
vol. 6, no. 4, pp. 271–295, 2012.
[10] F. Mondada, M. Bonani, X. Raemy, J. Pugh, C. Cianci,
A. Klaptocz, S. Magnenat, J.-C. Zufferey, D. Floreano,
and A. Martinoli, “The e-puck, a robot designed for
education in engineering,” in Proceedings of the 9th Con-
ference on Autonomous Robot Systems and Competitions,
2009, pp. 59–65.
Conference Paper
In multi-robot system, communication is indispensable for effective cooperative working. In this system, direct communication by physical methods such as light, sound, radio wave is quite general. But in biological system, especially in the insect world, not only the physical but also the chemical communication methods can be observed. As the chemical methods have some unique properties, it is challenging to apply such a method to the cooperative multi-robot system. Unfortunately, to treat real chemical materials for the robots is not easy for now because of some technical difficulties. In this paper, we propose virtual pheromone system in which chemical signals are simulated with the graphics projected on the floor, and in which the robots decide their action depending on the color information of the graphics. We examined the performance of this system through the foraging task, which is one of the most popular tasks for multi-robot system and is generally observed in ant societies.