Fig 2
Screenshot of the running experiment: on the left, the simulated environment in ARGoS; on the right, an aerial view of the robots with an overlay representing the pollutant cone. The simulated robot positions match those of the real robots. The object placed in the center of the arena represents the pollutant source σ and, in the simulated environment, the green cone highlights the area where robots perceive the pollutant. In the real environment, robots within the pollutant cone perceive the pollutant through their virtual sensors and light up their red LEDs with probability P_s.

Source publication
Conference Paper
Full-text available
We present a novel technology that allows real robots to perceive an augmented reality environment through virtual sensors. Virtual sensors are a useful and desirable technology for research activities because they allow researchers to quickly and efficiently perform experiments that would otherwise be more expensive, or even impossible. In particu...

Context in source publication

Context 1
... robots move randomly within a hexagonal arena and, when a robot perceives the pollutant (i.e., it lies within the diffusion cone defined above), it stops and lights up its red LEDs with probability P_s = 0.3 per timestep. Figure 2 shows two screenshots of the experiment and the full video can be found at https://youtu.be/7QAWi5JDwzA. ...
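As a rough illustration of the behavior described above, the following Python sketch implements one control step of such a robot, modeling the diffusion cone as a 2D sector rooted at the source. The robot interface (`robot.stop()`, `robot.random_walk()`, `robot.set_leds()`) is hypothetical and not taken from the paper's ARGoS implementation.

```python
import math
import random

P_S = 0.3  # per-timestep probability of lighting the red LEDs (from the paper)

def in_diffusion_cone(robot_xy, source_xy, cone_axis, half_angle, max_range):
    """Return True if the robot lies inside the pollutant diffusion cone.

    The cone is modelled as a 2D sector rooted at the source: the robot
    must lie within max_range of the source and within half_angle (radians)
    of the cone's axis direction.
    """
    dx = robot_xy[0] - source_xy[0]
    dy = robot_xy[1] - source_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > max_range:
        return dist == 0.0  # the source position itself counts as inside
    bearing = math.atan2(dy, dx)
    diff = (bearing - cone_axis + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle

def step(robot, source_xy, cone_axis, half_angle, max_range):
    """One control step: random walk until the pollutant is perceived."""
    if in_diffusion_cone(robot.xy, source_xy, cone_axis, half_angle, max_range):
        robot.stop()
        if random.random() < P_S:  # probabilistic signalling, as in the paper
            robot.set_leds("red")
    else:
        robot.random_walk()
```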

Similar publications

Article
Full-text available
We present modelling results for efficient coupling of nanodiamonds containing single colour centres to polymer structures on distributed Bragg reflectors. We explain how hemispherical and super-spherical structures redirect the emission of light into small numerical apertures. Coupling efficiencies of up to 68.5% within a numerical aperture of 0.3...

Citations

... Talamali et al. (2020) showed how swarms of robots equipped with very simple communication devices based on virtual pheromones were capable of producing cooperative behavior with hundreds of agents. Reina et al. (2015) proposed to use virtual perception to improve the design procedures of swarms of mobile robots. Similarly to our work, Sharma et al. (2020) studied the impact of perception and communication (via morphological computation) capabilities for modular robots trying to pass through a narrow space. ...
Article
Modular robots are collections of simple embodied agents, the modules, that interact with each other to achieve complex behaviors. Each module may have a limited capability of perceiving the environment and performing actions; nevertheless, by behaving coordinately, and possibly by sharing information, modules can collectively perform complex actions. In principle, the greater the actuation, perception, and communication abilities of the single module, the more effective the collection of modules. However, improved abilities also correspond to more complex controllers and, hence, to larger search spaces when designing them by means of optimization. In this article, we analyze the impact of perception, actuation, and communication abilities on the possibility of obtaining good controllers for simulated modular robots, that is, controllers that allow the robots to exhibit collective intelligence. We consider the case of modular soft robots, where modules can contract, expand, attach, and detach from each other, and make them face two tasks (locomotion and piling), optimizing their controllers with evolutionary computation. We observe that limited abilities often do not prevent the robots from succeeding in the task, a finding that we explain with (a) the smaller search space corresponding to limited actuation, perception, and communication abilities, which makes the optimization easier, and (b) the fact that, for this kind of robot, morphological computation plays a significant role. Moreover, we discover that what matters more is the degree of collectivity the robots are required to exhibit when facing the task.
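The abstract above mentions optimizing controllers with evolutionary computation. As a generic illustration (not the paper's actual algorithm), a minimal (1+λ) evolution strategy over a real-valued controller parameter vector could look like the sketch below, with the fitness function and dimensionality left as placeholders.

```python
import random

def evolve(fitness, dim, lam=10, sigma=0.1, generations=100):
    """Minimal (1+lambda) evolution strategy over controller parameters.

    fitness maps a parameter vector to a score (higher is better),
    e.g. distance travelled in a simulated locomotion trial.
    """
    parent = [random.uniform(-1.0, 1.0) for _ in range(dim)]
    parent_fit = fitness(parent)
    for _ in range(generations):
        # Mutate the parent lam times with Gaussian noise.
        offspring = [[p + random.gauss(0.0, sigma) for p in parent]
                     for _ in range(lam)]
        scores = [fitness(child) for child in offspring]
        best = max(range(lam), key=scores.__getitem__)
        # Elitist replacement: keep the parent unless a child is at least as good.
        if scores[best] >= parent_fit:
            parent, parent_fit = offspring[best], scores[best]
    return parent, parent_fit
```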
... The movements of the physical robots were tracked using markers, each composed of a 2×3 matrix of cells, similar to a very simple QR code (a decoding sketch follows this entry). They used the ARGoS simulator, since it allows them to simulate complex experiments with different types of swarm robots [23]. ...
Article
Full-text available
The term “swarm intelligence” covers a broad scope, generally defined as the collective behaviour of many individuals towards a certain task, each operating autonomously in a decentralised manner. Swarms are inspired by the biological behaviours of animals, and this technology involves decentralised local control and communication, which makes the problem-solving process more efficient when infused into swarm robotic technology. One such application is Mixed Reality, which links the real and virtual environments through Augmented Reality (AR) and Augmented Virtuality (AV). Enabling a robot to sense both physical and virtual environments via augmented means allows it to interact with both environments. Swarm robotics experiments and applications are relatively expensive compared to other types of robotics experiments. The most common solution is to use computer-based virtual simulations; however, these lack the real-world guarantee of execution. This study therefore introduces a hybrid solution that can simultaneously execute swarm behavioural experiments on actual robots and virtually deployed robots, which can be used to test the functionality of swarm robots with a large number of different environmental arrangements more easily than physically creating them. As identifying the necessary requirements for implementing a mixed-reality environment for swarm robotics simulation was this study's main focus, basic interaction models were designed to conduct experiments with both physical and virtual robots in near real-time in a mixed-reality environment. In addition, an open-source, distributed, and modular mixed-reality simulation framework was implemented with support libraries and applications.
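The citation context above describes tracking physical robots with markers made of a 2×3 matrix of cells, akin to a very simple QR code. A hedged sketch of how such a marker could be decoded, assuming the image patch has already been detected, perspective-rectified, and normalized to [0, 1] (details the excerpt does not specify):

```python
import numpy as np

def decode_marker(patch, rows=2, cols=3, threshold=0.5):
    """Read a rows x cols binary cell marker from a rectified grayscale patch.

    patch: 2D numpy array with values in [0, 1], perspective-corrected so
    the marker fills the whole patch. Each cell is classified as 0 (dark)
    or 1 (bright) by its mean intensity, and the six bits are packed
    row-major into an integer ID.
    """
    h, w = patch.shape
    bits = []
    for r in range(rows):
        for c in range(cols):
            cell = patch[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            bits.append(1 if cell.mean() > threshold else 0)
    marker_id = 0
    for b in bits:
        marker_id = (marker_id << 1) | b
    return marker_id
```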
... Moreover, robot safety interaction is discussed in [46]. Reina et al. [47] demonstrated that AR could be perceived in multi-robot systems using virtual sensing technologies. The authors described an AR-based virtual sensing framework. ...
Conference Paper
The Digital Twin (DT) in the manufacturing domain is already an everyday tool for visualizing various industrial systems, equipment, and produced products. When designing a new manufacturing unit or enlarging an existing factory, it is important to do so without affecting the manufacturing process flow itself. Simulation and digital manufacturing offer opportunities to plan and optimize this design process. By using actual physical machinery data gathered from Industrial Internet of Things (IIoT) sensors and feeding it to the DT, the layout can be optimized more precisely and effectively. However, there is no way to test the potential equipment simultaneously with the physical one in real time. This paper proposes a Mixed Reality (MR) based system framework and toolkit that enables physical industrial robots to interact with virtual equipment and other virtual robots. This way, via Virtual Reality (VR), it will be possible to design a system layout. Furthermore, via the Augmented Reality (AR) view, it will be possible to simulate the interaction between multiple robots, enhancing the possibilities of the physical environment and using the new precise-scale real-time design method.
... Given the coordinates of each robot in the environment, obtained from the image captured by the camera, a virtual semicircle was traced from the robot's coordinates (the center of its marker), as illustrated in Figure 3, representing the obstacle-detection area of each mobile robot. This area is intended to establish a virtual sensing system for the robots, as explored in (Makhataeva and Varol, 2020; Reina et al., 2015), by means of the virtual abilities conferred by the mixed-reality environment. With this in mind, each robot has a minimal intelligence to determine whether there are one or more obstacles in its detection area and then execute avoidance maneuvers (a geometric sketch of this check follows this entry). [Figure 3: Operation of the robots' virtual sensor.] From the point of view of executing the algorithm embedded in each mobile robot, at each iteration the robot accesses the ROS topics corresponding to its coordinates relative to the environment, as well as an updated list of coordinates of nearby objects, composed of any other ARTag marker (including other robots, as obstacles). [Table 1: Components of each robot.] ...
Conference Paper
In this paper, a mixed reality environment for navigation of multiple mobile robots is proposed. The developed environment allows the integration between virtual and real elements. To identify the real elements, ARTags markers are used, which can be obstacles, targets, robots or objectives, while the virtual ones (inserted into a virtual layer) are elements that can interact with the robots, such as virtual sensors, goals, obstacles, among others, depending on the experiment to be carried out. The Robot Operating System framework is used as a communication tool between the real and virtual layers. As a result, robots with hardware constraints acquire the skill to reach and intercept a moving goal. Three experiments are presented, highlighting the capacity of the proposed tool to provide the navigation of multiple robots in the presence of static and dynamic obstacles and targets. The obtained results validate the proposed mixed reality environment as a flexible experimental tool for the navigation of multiple robots with limited hardware.
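The translated excerpt above describes a virtual semicircle traced from each robot's marker center as its obstacle-detection area. A minimal geometric sketch of that check, assuming the semicircle spans ±90° around the robot's heading (an assumption, since the excerpt does not give the exact geometry):

```python
import math

def in_detection_semicircle(robot_xy, robot_heading, point_xy, radius):
    """Check whether point_xy falls inside the virtual semicircular
    detection area in front of the robot.

    robot_heading: robot orientation in radians; the semicircle spans
    +/- 90 degrees around the heading, out to the given radius.
    """
    dx = point_xy[0] - robot_xy[0]
    dy = point_xy[1] - robot_xy[1]
    if math.hypot(dx, dy) > radius:
        return False
    diff = (math.atan2(dy, dx) - robot_heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= math.pi / 2

def detect_obstacles(robot_xy, robot_heading, others, radius):
    """Return the coordinates of nearby objects (e.g. other ARTag markers)
    lying inside the robot's virtual detection area."""
    return [p for p in others
            if in_detection_semicircle(robot_xy, robot_heading, p, radius)]
```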
... Several natural systems [6,8,14,16,17] use stigmergy as a recruitment strategy, wherein the agents leave signals such as pheromones in the environment. These signals serve as a spatio-temporal memory that harnesses more individuals into the collective, and they have inspired the design of synthetic systems [18][19][20][21][22][23][24][25][26][27][28][29]. Task execution using stigmergy can then be thought of as a triadic interaction between three relevant variables: the agents, the stigmergic communication field, and the environment (see Fig. 1(d)), which vary spatio-temporally towards task execution (a minimal field-update sketch follows this entry). ...
Preprint
Full-text available
Cooperative task execution, a hallmark of eusociality, is enabled by local interactions between the agents and the environment through a dynamically evolving communication signal. Inspired by the collective behavior of social insects whose dynamics is modulated by interactions with the environment, we show that a robot collective can successfully nucleate a construction site via a trapping instability and cooperatively build organized structures. The same robot collective can also perform de-construction with a simple change in the behavioral parameter. These behaviors belong to a two-dimensional phase space of cooperative behaviors defined by agent-agent interaction (cooperation) along one axis and the agent-environment interaction (collection and deposition) on the other. Our behavior-based approach to robot design combined with a principled derivation of local rules enables the collective to solve tasks with robustness to a dynamically changing environment and a wealth of complex behaviors.
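As a generic illustration of the stigmergic communication field mentioned in the citation context above (not the construction scheme of the paper itself), a grid-based pheromone field that agents deposit on, and that evaporates and diffuses each timestep, might look like this; all parameters are illustrative:

```python
import numpy as np

class PheromoneField:
    """Grid-based stigmergic field: agents deposit, the field evaporates
    and diffuses each timestep (illustrative parameters, not from the paper)."""

    def __init__(self, shape, evaporation=0.01, diffusion=0.1):
        self.grid = np.zeros(shape)
        self.evaporation = evaporation
        self.diffusion = diffusion

    def deposit(self, cell, amount=1.0):
        self.grid[cell] += amount

    def step(self):
        # Evaporation: uniform exponential decay of the whole field.
        self.grid *= (1.0 - self.evaporation)
        # Diffusion: blend each cell with the mean of its 4-neighbours
        # (np.roll gives periodic, wrap-around boundaries, for simplicity).
        neigh = (np.roll(self.grid, 1, 0) + np.roll(self.grid, -1, 0) +
                 np.roll(self.grid, 1, 1) + np.roll(self.grid, -1, 1)) / 4.0
        self.grid = (1.0 - self.diffusion) * self.grid + self.diffusion * neigh

    def read(self, cell):
        return self.grid[cell]
```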
... For the first problem, researchers have combined augmented reality (AR) technology with mobile robot research in recent years [2]. Reina et al. [3] proposed the concept of "virtual sensors" and presented an AR-based sensing framework to control a swarm of e-puck robots. Omidshafiei et al. [4] designed a novel robotics platform for hardware-in-the-loop experiments, called measurable augmented reality for prototyping cyber-physical systems (MAR-CPS). ...
Conference Paper
Full-text available
Augmented reality (AR) technology has been introduced into the robotics field to narrow the visual gap between indoor and outdoor environments. However, without signals from satellite navigation systems, flight experiments in these indoor AR scenarios need other accurate localization approaches. This work proposes a real-time centimeter-level indoor localization method based on psycho-visually invisible projected tags (IPT), requiring a projector as the sender and quadrotors with high-speed cameras as the receiver. The method includes a modulation process for the sender, as well as demodulation and pose estimation steps for the receiver, where screen-camera communication technology is applied to hide fiducial tags by exploiting properties of human vision. Experiments have demonstrated that IPT can achieve accuracy within ten centimeters and a speed of about ten FPS. Compared with other localization methods for AR robotics platforms, IPT is affordable, using only a projector and high-speed cameras as hardware, and convenient, omitting a coordinate alignment step. To the authors' best knowledge, this is the first time screen-camera communication has been utilized for AR robot localization.
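One common scheme for the kind of screen-camera communication this abstract describes is temporal modulation: a tag is embedded as a small complementary offset in two successive display frames, so human vision averages the pair back to the original image while a frame-synchronized high-speed camera recovers the tag by differencing. The sketch below illustrates that general idea only; it is not the paper's actual IPT modulation:

```python
import numpy as np

def embed_tag(frame, tag_mask, delta=4):
    """Split one display frame into two complementary frames.

    frame: uint8 grayscale image; tag_mask: boolean array marking tag pixels.
    Human vision averages the pair back to `frame`; a camera synchronized
    to individual frames sees the +/- delta offset on tag pixels.
    """
    f = frame.astype(np.int16)
    offset = np.where(tag_mask, delta, 0)
    frame_a = np.clip(f + offset, 0, 255).astype(np.uint8)
    frame_b = np.clip(f - offset, 0, 255).astype(np.uint8)
    return frame_a, frame_b

def recover_tag(frame_a, frame_b, threshold=2):
    """Recover the hidden tag mask by differencing two captured frames."""
    diff = frame_a.astype(np.int16) - frame_b.astype(np.int16)
    return diff > threshold
```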
... [Flattened taxonomy table from the survey: reference lists grouped by robot type (Robotic Arms, Drones, Mobile Robots, Humanoid Robots, Vehicles, Actuated Objects, Combinations, Other Types), human-to-robot ratio (1:1, 1:m, n:1, n:m), scale (Small to Large), and proximity (Near, Far).] ...
Conference Paper
This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations and key design strategies, and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
... In this way, the perception of the environment by micro-robots with limited sensory capabilities can be enhanced by virtual sensors, allowing researchers to study their movements in more complex situations. In [55], researchers built an MR sensing system for swarm robots, which includes a robot swarm with virtual sensors, an MR platform for virtual scenario modeling, and a vision system to collect real-environment information. With this system, researchers were able to control and design the movement and behavior of swarm robots in complex scenarios. ...
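Tying the pieces of such an MR sensing system together, the control flow reduces to: track each real robot's pose with the vision system, sample the virtual scenario at that pose, and send the result to the robot as a virtual sensor reading. The component interfaces in this sketch are entirely hypothetical, not taken from any of the cited papers:

```python
# Hypothetical glue loop for a mixed-reality virtual-sensor system.

def virtual_sensing_loop(tracker, virtual_env, radio, robot_ids):
    """One update cycle: real poses in, virtual sensor readings out."""
    for rid in robot_ids:
        pose = tracker.get_pose(rid)        # vision system: real robot pose
        reading = virtual_env.sample(pose)  # MR platform: e.g. pollutant level
        radio.send(rid, reading)            # robot perceives it as a sensor
```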