Conference Paper

The SWARM-BOTS project

DOI: 10.1007/978-3-540-30552-1_4
Conference: Swarm Robotics, SAB 2004 International Workshop, Santa Monica, CA, USA, July 17, 2004, Revised Selected Papers
Source: DBLP


This paper provides an overview of the SWARM-BOTS project, a robotics project sponsored by the Future and Emerging Technologies program of the European Commission. The paper illustrates the goals of the project, the robot prototype, and the 3D simulator we built. It also reports on the results of experimental work in which distributed adaptive controllers are used to control a group of real or simulated robots so that they perform a variety of tasks that require cooperation and coordination.

Cited by:
    • "Such a mapping has been used since the advent of evolutionary computation and genetic programming research (Friedberg , 1959), and is the most widely used encoding scheme in the field. Moreover, direct encodings are easy to implement , and have been applied successfully for the evolution of various robot behaviors such as locomotion gaits for multilegged and tensegrity platforms (e.g., Lewis et al. (1992); Téllez et al. (2006); Koos et al. (2013); Iscen et al. (2013)), navigation and obstacle avoidance for wheeled robots (e.g., Nolfi and Floreano (2001)), body-brain evolution in artificial life systems (Hornby and Pollack, 2002), and cooperative foraging in robot swarms (e.g., Waibel et al. (2009); Dorigo et al. (2005); Kernbach (2013)). "
    ABSTRACT: Evolutionary computation is a promising approach to autonomously synthesize machines with abilities that resemble those of animals, but the field suffers from a lack of strong foundations. In particular, evolutionary systems are currently assessed solely by the fitness score their evolved artifacts can achieve for a specific task, whereas such fitness-based comparisons provide limited insights about how the same system would evaluate on different tasks, and its adaptive capabilities to respond to changes in fitness (e.g., from damages to the machine, or in new situations). To counter these limitations, we introduce the concept of "evolvability signatures", which picture the statistical distribution of behavior diversity and fitness after mutations. We tested the relevance of this concept by evolving controllers for hexapod robot locomotion using five different genotype-to-phenotype mappings (direct encoding, generative encoding of open-loop and closed-loop central pattern generators, generative encoding of neural networks, and single-unit pattern generators (SUPG)). We observed a predictive relationship between the evolvability signature of each encoding and the number of generations required by hexapods to adapt from incurred damages. Our study also reveals that, across the five investigated encodings, the SUPG scheme achieved the best evolvability signature, and was always foremost in recovering an effective gait following robot damages. Overall, our evolvability signatures neatly complement existing task-performance benchmarks, and pave the way for stronger foundations for research in evolutionary computation.
    Information Sciences 10/2014; 313. DOI: 10.1016/j.ins.2015.03.046
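
The "evolvability signature" described above has a direct operational reading: mutate one encoded controller many times and record, for each mutant, how far its behavior moves and how its fitness changes; the resulting distribution is the signature. The following is a minimal sketch of that idea under assumptions of our own; the Gaussian mutation operator and the evaluate / behavior_descriptor callables are placeholders, not the paper's actual code.

    import random

    MUTATION_STD = 0.1   # assumed mutation strength for a direct (flat-list) encoding
    NUM_MUTANTS = 200    # number of sampled mutations per signature

    def mutate(genome):
        """Gaussian mutation of a directly encoded genome (one assumed operator)."""
        return [g + random.gauss(0.0, MUTATION_STD) for g in genome]

    def evolvability_signature(genome, evaluate, behavior_descriptor):
        """Return (behavior_distance, fitness_change) pairs for sampled mutants.

        evaluate(genome) and behavior_descriptor(genome) are assumed to run the
        controller (e.g., in simulation) and return a scalar fitness and a
        behavior vector, respectively.
        """
        base_fitness = evaluate(genome)
        base_behavior = behavior_descriptor(genome)
        signature = []
        for _ in range(NUM_MUTANTS):
            mutant = mutate(genome)
            d_behavior = sum((a - b) ** 2 for a, b in
                             zip(behavior_descriptor(mutant), base_behavior)) ** 0.5
            d_fitness = evaluate(mutant) - base_fitness
            signature.append((d_behavior, d_fitness))
        return signature
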
    • "This makes them suitable for jobs such as exploration and foraging [1], [2], [3], construction [4], [5], and fire fighting or HAZMAT situations [6], [7]. Indeed, in the recent years, we have seen swarms move from a theoretical possibility to systems implemented on real robots in laboratory settings, such as those in [8], [9], [10]. While there are some cases where swarms might act with full autonomy, many tasks require some sort of coordination between a human operator and the swarm. "
    ABSTRACT: As swarms are used in increasingly more complex scenarios, further investigation is needed to determine how to give human operators the best tools to properly influence the swarm after deployment. Previous research has focused on relaying influence from the operator to the swarm, either by broadcasting commands to the entire swarm or by influencing the swarm through the teleoperation of a leader. While these methods each have their different applications, there has been a lack of research into how the influence should be propagated through the swarm in leader-based methods. This paper focuses on two simple methods of information propagation, flooding and consensus, and compares the ability of operators to maneuver the swarm to goal points using each, both with and without sensing error. Flooding involves each robot explicitly matching the speed and direction of the leader (or matching the speed and direction of the first neighboring robot that has already done so), and consensus involves each robot matching the average speed and direction of all the neighbors it senses. We discover that the flooding method is significantly more effective, yet the consensus method has some advantages at lower speeds, and in terms of overall connectivity and cohesion of the swarm.
    Proceedings of the 2013 IEEE International Conference on Systems, Man, and Cybernetics; 10/2013
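
The two propagation rules compared in that abstract translate almost directly into code: flooding copies the leader's velocity to any robot with an informed neighbor, while consensus averages the velocities of sensed neighbors. A minimal sketch follows, assuming robots keyed by id, a neighbors dictionary, and 2D velocity tuples; this illustrates the rules as described, not the authors' implementation.

    def flooding_update(velocities, informed, leader_velocity, neighbors):
        """Flooding: a robot adopts the leader's velocity as soon as it is informed
        or any of its neighbors was informed in the previous step."""
        new_velocities = dict(velocities)
        new_informed = set(informed)
        for robot, nbrs in neighbors.items():
            if robot in informed or any(n in informed for n in nbrs):
                new_velocities[robot] = leader_velocity
                new_informed.add(robot)
        return new_velocities, new_informed

    def consensus_update(velocities, neighbors):
        """Consensus: each robot adopts the average velocity of its sensed neighbors."""
        new_velocities = {}
        for robot, nbrs in neighbors.items():
            if not nbrs:
                new_velocities[robot] = velocities[robot]
                continue
            avg_x = sum(velocities[n][0] for n in nbrs) / len(nbrs)
            avg_y = sum(velocities[n][1] for n in nbrs) / len(nbrs)
            new_velocities[robot] = (avg_x, avg_y)
        return new_velocities

    # Example: three robots in a line; robot 0 is the teleoperated leader.
    velocities = {0: (1.0, 0.0), 1: (0.0, 0.0), 2: (0.0, 0.0)}
    neighbors = {0: [1], 1: [0, 2], 2: [1]}
    velocities, informed = flooding_update(velocities, {0}, (1.0, 0.0), neighbors)
    # After one step, robot 1 (adjacent to the informed leader) matches the leader;
    # robot 2 follows on the next call.
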
    • "In other words, agents obtain a high payoff by coordinating their actions, but may end up in a suboptimal situation when taking the safest bet. As such, the stag-hunt game represents, in a simplified manner, coordination problems like those encountered in swarm robotics (Dorigo et al., 2005) or the evolution of language (Christiansen & Kriby, 2003). Swarm robotics research investigates how the collective "
    ABSTRACT: Designing an adaptive multi-agent system often requires the specification of interaction patterns between the different agents. To date, it remains unclear to what extent such interaction patterns influence the dynamics of the learning mechanisms inherent to each agent in the system. Here, we address this fundamental problem, both analytically and via computer simulations, examining networks of agents that engage in stag-hunt games with their neighbors and thereby learn to coordinate their actions. We show that the specific network topology does not affect the game strategy the agents learn on average. Yet, network features such as heterogeneity and clustering effectively determine how this average game behavior arises and how it manifests itself. Network heterogeneity induces variation in learning speed, whereas network clustering results in the emergence of clusters of agents with similar strategies. Such clusters also form when the network structure is not predefined, but shaped by the agents themselves. In that case, the strategy of an agent may become correlated with that of its neighbors on the one hand, and with its degree on the other hand. Here, we show that the presence of such correlations drastically changes the overall learning behavior of the agents. As such, our work provides a clear-cut picture of the learning dynamics associated with networks of agents trying to optimally coordinate their actions.
    Adaptive Behavior 10/2010; 18(5):416-427. DOI: 10.1177/1059712310384282
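
For readers unfamiliar with the setting, the stag-hunt payoffs and one possible learning rule can be sketched as follows. The payoff values and the softmax Q-learning update are illustrative assumptions of ours; they need not match the learning dynamics analyzed in the paper.

    import math
    import random

    # Stag-hunt payoffs: coordinating on STAG pays most; HARE is the safe option.
    STAG, HARE = 0, 1
    PAYOFF = {
        (STAG, STAG): 4.0,  # both hunt the stag: best joint outcome
        (STAG, HARE): 0.0,  # hunting the stag alone fails
        (HARE, STAG): 3.0,  # hunting hare is safe regardless of the partner
        (HARE, HARE): 3.0,
    }

    ALPHA = 0.1  # learning rate (assumed)
    TAU = 0.2    # softmax temperature (assumed)

    def choose(q):
        """Softmax action selection over the two Q-values."""
        weights = [math.exp(q[a] / TAU) for a in (STAG, HARE)]
        return random.choices((STAG, HARE), weights=weights)[0]

    def play_round(q_values, neighbors):
        """Each agent plays one stag hunt against a random neighbor and updates
        the Q-value of the action it chose."""
        actions = {agent: choose(q) for agent, q in q_values.items()}
        for agent, nbrs in neighbors.items():
            if not nbrs:
                continue
            partner = random.choice(nbrs)
            reward = PAYOFF[(actions[agent], actions[partner])]
            a = actions[agent]
            q_values[agent][a] += ALPHA * (reward - q_values[agent][a])
        return actions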