Copyright © 2009 by ASME
Proceedings of IDETC/CIE 2009
ASME 2009 International Design Engineering Technical Conferences &
Computers and Information in Engineering Conference
August 30-September 2, 2009, San Diego, CA, USA
DETC2009-86525
A BRIEF INVESTIGATION OF SWARM THEORY AND APPLICATIONS
Jay T. Johnson
jay.johnson@gatech.edu
The G. W. Woodruff School of Mechanical Engineering
Georgia Institute of Technology
Atlanta, Georgia 30332
ABSTRACT
For the last half century scientists have been discovering the
biological complexities of colonies of ants, termites, bees and
other insects. Although these colonies are composed of
individuals with limited physical and intellectual aptitude, the
behavior of the system as a whole displays highly adaptive
and intelligent behavior. As a result, in the last two decades,
engineers have been pursuing methods to create artificial
swarm intelligence and applying these concepts of complex
swarming systems to traditional and novel engineering areas
such as robotics, optimization algorithms, wireless networks,
and military applications. In this paper, an overview of swarm
theory research is provided, followed by a more in-depth
demonstration of swarm behaviors including swarm
clustering, formation control and swarm motion.
KEYWORDS
Swarm Theory, Complexity, Complex Adaptive Systems
1 INTRODUCTION
Swarm theory originated from the studies of colonies of ants,
termites and other insects. Kenneth Lodding provides an
excellent recapitulation of the history of computational
biomimicry in [1], where he explains that early work in
biology and entomology inspired the current work of
engineers, economists, and computer scientists. Specifically,
Lodding notes that the studies of Pierre-Paul Grassé on
termites [2] and Jean-Louis Deneubourg on ants [3] inspired
Santa Fe Institute researchers Bonabeau, Dorigo, and
Theraulaz to create a comprehensive book on swarming
insects [4], which elaborates on “the emergent problem-
solving capabilities of large groups of simple, interacting
agents, such as ants, bees, termites, and wasps” [1]. John
Holland identified multiple characteristics of complex
adaptive systems, which like biomorphic colonies, are
composed of autonomous interacting agents. He found that
these systems generally exhibited the behaviors of collective
interaction, autonomous action, emergence, adaptation, and
evolution [5]. Furthermore, Murthy and Krishnamurthy
compared multi-agent system networks to Nature-Inspired
Smart Systems and discovered overarching properties of fault
tolerance and resiliency to damage [6]. Due to these attractive
characteristics of biological systems, swarm applications have
been steadily growing since Craig Reynolds first demonstrated
swarm theory with the bird-like flocking behavior of Boids in
1987 [7].
Swarm tools achieve robustness by exploiting the
dynamic, self-organizing characteristics of colonies to produce
agent-based intelligence. Early pioneers in this area were
surprised that such valuable behavior was inherent in systems
with agents obeying elementary rules—and yet it is the
simplicity of the system which allows macro-scale robustness
to emerge. Essentially, there are fewer components and
interactions that can fail. Consequently, the key advantage of
using a decentralized swarm control system is that the
intelligence is dispersed over the entire group and a
malfunction of one agent does not cause parallel (and
therefore catastrophic) loss of system functionality.
The swarm gains its intelligence through interactions
between agents and the environment. The challenge is how to
artificially recreate rules of interaction seen in nature to solve
engineering problems like group organization and formation,
moving, tracking, and obstacle avoidance. Currently, agent
interactions in swarm models use different rules to dictate the
direction and magnitude of agent motion, e.g., when far from
the group, move toward the other agents; when close to the
group, move away from the other agents. Often these
governing rules are represented by functions derived from
Newtonian dynamics of point masses, as in [8], but physics-
based functions aren’t universally employed because there are
benefits to using specifically crafted agent equations of
motion. In [9-11], Gazi and Passino introduce and
demonstrate their agent velocity function which uses a well-
balanced combination of one linear attraction term and one
nonlinear repulsion term to create a spatial velocity gradient
and equilibrium distance between agents. This type of
governing equation has the advantage that the parameters can
be adjusted for individual agents, and therefore different
swarm formations can be constructed with the agents. Gazi
and Passino demonstrate this ability with the formation of
circles, spheres and triangular formations using their velocity
function.
The functions which keep the agents in a formation
do not allow them to collide with one another, but they also do
not provide a means for the swarm to move. When a swarm
organizes its agents into a formation, the center of mass of the
agents remains unchanged because the agents have balanced
attraction and repulsion forces. Therefore, to move the swarm
as a whole, additional forces must be applied. Different
methods can accomplish this task. Gazi and Passino use global
velocity gradients to drive all the swarm agents uniformly [9].
Another approach is taken in this paper in which a leader
agent navigates within the environment while the swarm
retains its structure via inter-agent attraction/repulsion forces.
Both of these methods are appropriate for different situations
but both require knowledge of the environment. For instance,
to create a gradient toward a target or away from a threat, the
location of the target or threat must be known and
continuously updated. One advantage of using a leader is that
the swarm motion can incorporate a human-in-the-loop to
guide the swarm by driving the leader remotely. Although
some may argue this removes the natural robustness of the
swarm, the control could easily be transferred to different
agents in the event of a failure of the leader.
The goal of this paper is to point out the wide-range
of swarm applications and exemplify the ease of creating a
simulation to accomplish seemingly sophisticated tasks. The
paper is broken into two parts. Section 2 investigates the
diverse application areas of swarm theory, although it is not
intended to be a comprehensive literature survey. The section
is meant to introduce the reader to the types of problems
which are being tackled by swarm tools. In Section 3 the tasks
of swarm formation control, motion and obstacle avoidance
are presented to demonstrate how these practical behaviors
can be generated with particle swarm agent-based models.
This section extends the work of Gazi and Passino with a
novel follow-the-leader motion generation technique and
discussion of how to tailor agent parameters to particular
problems. Lastly, an error function is created and studied with
respect to leader velocity and swarm parameters.
2 CURRENT SWARM APPLICATIONS
Particle swarm models are simple to construct but they yield a
wealth of behavioral richness. Since the foundation of the
field with the Boids simulation more than 20 years ago,
researchers have been studying the powerful properties of
emergent agent-based systems using simulations. Researchers
are currently applying swarm tools to optimization,
autonomous robot formation, tracking, military operations,
team control, and virtual world navigation.
Engineers have been eager to incorporate swarming
algorithms into distributed robot systems after the robustness
principles of these systems were discovered. Unfortunately,
real systems are limited by mobility, agility, communication
rates and must overcome mechanical noise and networking
interferences. However, in centralized control approaches the
entire system fails if one agent malfunctions or is destroyed,
so by distributing the intelligence to all the agents, the group
as a whole is a highly robust entity. For this reason, the US
military is particularly active in investigating autonomous and
semi-autonomous swarm systems. Abatti at the Maxwell Air
Force Base takes a detailed look at Unmanned Aerial Vehicles
(UAVs) in relation to military needs in [12]. Likewise, Dickie
creates a simulation tool for swarms of autonomously-acting
military robots, in which the Multi-Agent Robot Swarm
Simulation uses state sensing and behavioral tools to allow a
modeler to control a number of complex agents and their
interactions [13]. Dickie also simulates different war-time
situations such as a reconnaissance mission where Micro Air
Vehicles hunt for mobile tanks. Similarly, Baldwin
investigates the scenario of having hundreds or thousands of
UAVs performing Intelligence, Surveillance and
Reconnaissance (ISR) missions in [14].
Other researchers are working on robot com-
munication and interaction. Dorigo, Tuci and other researchers
have developed swarm-bots, which are small autonomous
robots capable of self-organization and self-assembling into
different formations [15, 16]. These robots have a simple
circular housing and a gripper arm. They move with tank-like
tracks and have the capability of latching on to other swarm-
bots to form chains or other structures to complete collective
objectives. For instance, swarm-bots have the ability to grab
each other to cross a trough that a single robot would not be
able to bypass alone. Furthermore, Dorigo shows that using
exploration and path formation algorithms, robots can
communicate and transport an object to a designated area.
Similarly, Baldassarre, Parisi, and Nolfi present a model of a
self-organizing system of connected robots which can
coordinate their efforts to perform motion, obstacle
avoidance, and search for a light source [17].
Swarm theory is also applied in object tracking.
Gechter et al. created a physics-based simulation of target
tracking and applied the control logic to a multi-robot setup
[18]. The authors tested the swarm tracking algorithm by
tracking a mini soccer ball in the RoboCup—a robotic world
cup. They used an agent-based model to track the ball but
incorporated inter-agent attraction/repulsion forces to keep
teammates from interfering with each other.
Agent-based models often recreate how ants forage
for food to find the optimal route. Ants lay pheromones down
along their path, so over time more ants follow the shortest
path to the food and it becomes pheromone-heavy. Small
deviations from the original path optimize the route as more
ants will eventually take the shortcut. This has inspired ant
colony optimization [19], as well as acting as a starting point
for a number of pheromone-based models. For instance,
Panait and Luke have created two pheromone utility models
for foraging, trail creation, and other tasks in [20]. Tinkham
and Menezes use StarLogo to simulate coordinated robots
required to find their way out of a room with a door [21]. In
the simulations they use a pheromone-based method to
instruct agents where the exit can be found based on the
success of other agents. Sauter et al. use synthetic pheromones
in military applications for unmanned robotic vehicles [22].
They found that by tuning the algorithms, the robots can be
controlled to dynamically find targets and create safe routes
around dangerous locations.
Other researchers such as Yoon and Maher, rely on
swarm intelligence to determine optimal paths in dynamic
virtual worlds [23]. By allowing a swarm to search different
paths to a desired destination, computer users in the world will
no longer need navigational strategies. By implementing a
wayfinding tool, the authors describe how the swarm can
explore the world and be used for continuous path
optimization as the world changes. At this point the
technology only exists in virtual worlds, but the authors point
out that future swarms may assist real-world, real-time
navigation.
Harmann [24] looked at swarm formations including
clustering, patch sorting, and annular sorting. In these
experiments, it is no longer the objective to spatially distribute
the agents, but rather pack them as tightly as possible.
Homogenous swarm agents have been shown to do this in
[10] and are studied further here using three different agent
types on a rectilinear grid. Chen and Meng investigate an
incremental clustering algorithm for agents in 3-dimensional
space [25]. Generally, clustering is an a priori task; however,
Chen and Meng use data mining techniques to optimize their
swarm motion given the memories of the agents.
It should also be noted that major advancements in a
number of fields beyond those shown here can be attributed to
swarm theory. The applications in this section represent only a
cross-section of the problem domain where swarm principles
are currently assisting researchers. For a more comprehensive
survey of multi-agent concepts, e.g. swarm applications,
reinforcement learning, evolutionary computation, complex
systems, agent modeling, and robotics, see the work by Zohdi
[8] and Panait and Luke [26].
3 SWARM STUDIES
Although swarm applications are diverse and often applied to
complex problems, the fundamental theory for their operation
is simple. The goal of this section is to demonstrate swarm
formation control, movement and obstacle avoidance using
independent but interacting agents.
Gazi and Passino discuss the merits of their equation
of motion, explore the stability of the solutions, and create
swarm motion using a planar gradient in [9]. Using this as a
starting place, Gazi and Passino’s work will be analyzed and
then extended to demonstrate obstacle avoidance with a
follow-the-leader motion control technique. This incorporates
the gradient-based obstacle avoidance studied in [8] and [9]
while controlling the motion of the swarm with a follow-the-
leader approach.
3.1 Equations of Motion
To demonstrate simple swarming algorithms and
behaviors, the control law introduced by Gazi and Passino is
investigated. The velocity for each agent, i, in an M-size
swarm is given by the sum of forcing functions in Eq. (1).
\dot{x}^i = \sum_{j=1,\, j \neq i}^{M} g(x^i - x^j)   (1)

x^k is an n-dimensional position vector of agent k and the
forcing function between agents is

g(y) = -y \left[ a - b \exp\!\left( -\|y\|^2 / c \right) \right]   (2)

where \|y\| is the Euclidean norm of y. For the parameter set
{a = 1, b = 20, c = 0.2}, this function is shown in Figure 1.
Figure 1: The g function for different distances.
This function is specifically crafted to balance attraction and
repulsion interactions and provide a stable equilibrium
distance for the agents. As Gazi and Passino demonstrated,
this can be shown by breaking the equation into attraction and
repulsion terms, given by g_a and g_r in Eqs. (3)-(5).

g(y) = -y \left[ g_a(\|y\|) - g_r(\|y\|) \right]   (3)

g_a(\|y\|) = a   (4)

g_r(\|y\|) = b \exp\!\left( -\|y\|^2 / c \right)   (5)

In this equation, there is a point at which the attraction and
repulsion forces are equivalent. Solving for this point leads to
the equilibrium separation distance in Eq. (6).

\delta = \|y\|_{eq} = \sqrt{ c \ln\!\left( b / a \right) }   (6)
For agent separation distances \|y\| > \delta,
g_a(\|y\|) > g_r(\|y\|), and for \|y\| < \delta,
g_a(\|y\|) < g_r(\|y\|). So, the attraction term dominates
the function at large distances, whereas the repulsion term
dominates once the distance is less than δ, shown in Figure 2.

Figure 2: Attraction-Repulsion Interactions. The total function
and its attraction and repulsion terms are plotted against the
distance between agents (Euclidean norm).
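The balance just described can be checked numerically. The short sketch below evaluates the attraction and repulsion terms and the equilibrium distance for the Figure 1 parameter set {a = 1, b = 20, c = 0.2}; the paper names no implementation language, so Python is used here as an illustrative stand-in.

```python
import math

# Attraction and repulsion terms of the forcing function, written for the
# scalar separation y = ||y||, with the Figure 1 parameter set.
a, b, c = 1.0, 20.0, 0.2

def g_a(y):                        # linear attraction coefficient
    return a

def g_r(y):                        # bounded nonlinear repulsion
    return b * math.exp(-y**2 / c)

def net(y):                        # positive => attraction dominates
    return g_a(y) - g_r(y)

delta = math.sqrt(c * math.log(b / a))      # equilibrium distance, Eq. (6)
print(delta)                                # ~0.774 units for this set
print(abs(net(delta)) < 1e-9)               # balanced exactly at delta
print(net(2 * delta) > 0, net(0.5 * delta) < 0)
```

The last line confirms the regimes described above: attraction dominates beyond δ and repulsion dominates inside it.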
Since the attraction term grows linearly with agent separation
distance, if a time-stepping routine is used, large time steps
cause simulation instability. This can be partially avoided by
decreasing the attraction term in Eq. (2), but for forward Euler
approximations, the time step must be selected carefully. For
instance, if there are M agents, and one is positioned
reasonably far away from the other agents, the change in
position of this agent in one time step can be approximated by
\Delta x^i \approx \sum_{j=1,\, j \neq i}^{M} a\, y^{ij} \cdot t_{step}   (7)

where y^{ij} = x^i - x^j.
So, if in one time step the agent is relocated to position farther
from the swarm than it currently is, the agent will continue to
progressively make larger spatial jumps and the simulation
will diverge.
Using a time step of 0.05 for the simulations, the
swarm clustering can be shown for 2- and 3-dimensional
spaces below. The stable equilibrium solutions are circular or
spherical formations shown in Figure 3 and Figure 4. In these
simulations the blue dots are initial positions chosen randomly
in an R^n hypercube with side lengths 8 units long. The black
dots represent the path that the agents take to reach the final
location, indicated with a red dot.

Figure 3: 2D 20-agent clustering simulation.

Figure 4: 3D 20-agent clustering simulation.
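A minimal forward-Euler sketch of this clustering simulation follows. The 20 agents and the random 8 x 8 initialization follow the text; the time step is smaller than the 0.05 quoted above as a conservative stability choice, and the seed is arbitrary.

```python
import numpy as np

# Forward-Euler clustering sketch for Eqs. (1)-(2).
rng = np.random.default_rng(0)
M, t_step, steps = 20, 0.01, 1500
a, b, c = 1.0, 20.0, 0.2

x = rng.uniform(0.0, 8.0, size=(M, 2))      # random start positions (blue dots)
com0 = x.mean(axis=0)                       # swarm center of mass

for _ in range(steps):
    y = x[:, None, :] - x[None, :, :]       # y[i, j] = x^i - x^j for all pairs
    coeff = a - b * np.exp(-np.sum(y**2, axis=-1) / c)
    np.fill_diagonal(coeff, 0.0)            # no self-interaction
    xdot = -np.sum(coeff[:, :, None] * y, axis=1)   # Eq. (1)
    x = x + t_step * xdot                   # forward Euler step

# Pairwise forces are antisymmetric, so the center of mass never moves,
# and the agents settle into a bounded cluster around it.
com1 = x.mean(axis=0)
spread = np.linalg.norm(x - com1, axis=1).max()
print(com0, com1, spread)
```

Raising t_step toward and beyond the value estimated from Eq. (7) reproduces the divergence discussed above.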
Surrounding each of the agents there is a gradient
which determines the motions of the other agents. To give
more intuitive understanding of this principle, a surface S can
be created which demonstrates the applied attraction/repulsion
potential on the other agents. The surface S has the
characteristic that for every point the gradient is proportional
to the velocity imposed on the surrounding agents. In other
words, if an agent were a ball on S, the instantaneous velocity
of any agent is the slope of the surface. Since the surface is
only a function of the magnitude of the separation distance,
the surface can be found by integrating the 1-dimensional g
function with respect to distance and changing the sign so that
the equilibrium distance is associated with the minimum
potential energy of the agent. This derivation of the surface is
shown in Eqs. (8) and (9).
g(y) = -\frac{dS}{dy} = -a y + b y \exp\!\left( -y^2 / c \right)   (8)

S = -\int g(y)\, dy = \frac{a}{2} y^2 + \frac{b c}{2} \exp\!\left( -y^2 / c \right)   (9)
Extending the surface to 2-space, the result of this integration
is shown in Figure 5 with the agent located at (20,20).
Figure 5: Depiction of the instantaneous inter-agent
attraction/repulsion forces caused by an agent at (20,20).
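The surface derivation can be verified numerically: the slope of S reproduces -g, and the equilibrium distance sits at the minimum of the potential. The sketch below (Python, as an illustrative stand-in) uses the Figure 1 parameter set.

```python
import math

# Numerical check of the potential of Eq. (9): its slope reproduces -g(y)
# from Eq. (8), and its minimum sits at the equilibrium distance of Eq. (6).
a, b, c = 1.0, 20.0, 0.2

def g(y):                                   # 1-D forcing function, Eq. (8)
    return -a * y + b * y * math.exp(-y**2 / c)

def S(y):                                   # potential surface, Eq. (9)
    return 0.5 * a * y**2 + 0.5 * b * c * math.exp(-y**2 / c)

delta = math.sqrt(c * math.log(b / a))      # equilibrium distance, Eq. (6)

# dS/dy = -g(y): compare a central finite difference against -g
h = 1e-6
for y in (0.3, delta, 1.5):
    dS = (S(y + h) - S(y - h)) / (2 * h)
    print(y, dS, -g(y))                     # the two columns agree

# delta minimizes S: neighboring separations sit higher on the surface
print(S(delta) < S(delta - 0.05), S(delta) < S(delta + 0.05))   # True True
```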
Figure 6: Concentric circles of swarm agents.
It should be noted that for some sets of parameters
the swarm does not form a hypersphere but concentric shells
instead. For instance, with parameter set {M = 30, t_step = 0.05,
a = 0.15, b = 3, c = 0.1} the result is concentric circles of
agents, shown in Figure 6. The concentric circle formation
results from another equilibrium state for the agents. In this
configuration the summation of the inter-agent velocities is
nullified for all the agents; thus, the swarm is at steady state.
This means that there is no guarantee that the agents will form
the desired formation and the formation is purely dependent
on initial conditions. Those agents who are close to the center
of mass of the swarm during initialization will likely become
trapped in one of the smaller concentric circles, whereas
agents toward the periphery will likely become part of the
outer ring.
3.2 Motion and Follow-the-leader Simulations
The swarm can be given motion in two different ways. The
first is by imposing the typical attraction/repulsion agent
forces and then adding an additional motion term to the
velocity function of all of the agents of the swarm. This results
in a positional shift for the entire swarm during each time step.
In [9], Gazi and Passino take this approach and create motion
by creating a global velocity gradient. The second swarm
motion option, and the one which will be studied here, is to
command one of the agents either with an additional velocity
term or directly. By adding a bias term to one agent the entire
swarm will compensate and chase that agent. Another option
is to remove the effects of inter-agent forces on a single agent
and then prescribe a path for this leader agent. This could
model a group of UAVs with the lead vehicle operated
remotely with a human controller. The remaining agents in the
swarm follow the leader agent while retaining their formation.
After experimenting with a leader agent including
and excluding the inter-agent forces, it was determined that
having an independent leader agent gave better control over
the swarm; in the former method the inter-agent forces were
not predictable and generated guidance challenges, i.e., the
interaction forces would cause the leader to veer off course.
In Figure 7, the swarm is randomly instantiated in an 8 x 8
area in the lower left corner of the plane. Then the leader
agent moves in the +x, +y, +x, and -y directions while the
swarm follows in formation. Note, for computational and
demonstrative purposes, not every timestep is shown for the
simulation. The initial formation and each of the 90º turns are
shown in more detail. Notice that the swarm agents have
difficulty keeping up with the leader agent in this simulation.
In Figure 8 this issue is remedied by changing the a term in
the equation of motion in Eq. (2). This means the swarm is
more tightly packed and also more capable of tracking the
leader. By comparing Figure 7 with Figure 8, the change is
apparent in the tighter corners and the final agent locations. It
should be noted that on the 3rd turn of these simulations, the
leader agent stops moving temporarily to allow the agents to
reconfigure their structure. This explains the difference in
turning radius of the swarms in those locations.
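The independent-leader scheme described above can be sketched as follows: agent 0 ignores the inter-agent forces and moves with a prescribed velocity, while the remaining agents obey Eq. (1) and are dragged along by their attraction to the leader. The leader speed, agent count, and seed are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

# Follow-the-leader sketch: agent 0 is an independent, remotely driven leader.
rng = np.random.default_rng(1)
M, t_step, steps = 6, 0.01, 3000
a, b, c = 1.0, 20.0, 0.2
leader_v = np.array([0.3, 0.0])             # drive the leader in +x

x = rng.uniform(0.0, 2.0, size=(M, 2))

for _ in range(steps):
    y = x[:, None, :] - x[None, :, :]
    coeff = a - b * np.exp(-np.sum(y**2, axis=-1) / c)
    np.fill_diagonal(coeff, 0.0)
    xdot = -np.sum(coeff[:, :, None] * y, axis=1)   # Eq. (1) for followers
    xdot[0] = leader_v                      # leader: prescribed velocity only
    x = x + t_step * xdot

lag = np.linalg.norm(x[1:] - x[0], axis=1).max()
print(x[0], lag)                            # leader position; worst follower lag
```

Because the linear attraction grows with separation, the followers lock onto the moving leader with a finite lag; increasing the leader speed increases that lag, which motivates the error studies of Section 3.5.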
3.3 Formation Control Studies
By assigning different velocity functions between the agents
(the g(y)’s in Eq. (2) are not the same for all agents), the
equilibrium distance between agents, δ, can be customized to
produce different configurations.

Figure 7: Swarm motion with a small attraction term. The
leader agent is significantly far ahead of the other agents,
indicating the attraction term does not match the velocity of the
leader well.

Figure 8: Swarm motion with a large attraction term.

In Figure 9, the distances
between the agents are prescribed such that they form a
triangle with sides of length 2. To do this, the δ's are selected
to be 1, √3, and 2 depending on the i and j agents.
This swarm configuration can also be driven by
assigning one agent a prescribed motion. In the case shown in
Figure 10, the driver agent is chosen to be a vertex of the
triangle and the agent moves to the right at a constant velocity
while the other agents are pulled to the right by their attraction
to the leader.
By driving the triangular swarm in more complex
motions, interesting behavior can be noticed. In Figure 11, the
triangle is heavily distorted and the agents have fallen
disproportionately out of formation. This is an unfortunate
byproduct of formation control: the attraction terms are
adjusted for each of the inter-agent attractions according to the
Figure 9: Creating a formation using distances of 1, √3, and 2.
Figure 10: Simple formation creation and motion.
desired equilibrium distance. The reason that the agents do not
follow the leader at the same rate is that the attraction term for
each of the agents is found individually based on
a = b \exp\!\left( -\delta^2 / c \right)   (10)
For instance, in the plot in Figure 11, the parameter set was
{b = 4, c = 0.4, t_step = 0.05, Δx_driver/step = 0.1}, which led to
smaller attraction terms in agents 3, 4 and 5 with respect to
agent 1, shown in Table 1. The agent numbering is shown in
Figure 12. This problem could be corrected by decreasing the
rate of the leader agent or increasing the attraction terms in the
agents, which would unfortunately also adjust the equilibrium
distances unless the c and/or b terms were altered as well.
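The entries of Table 1 follow directly from Eq. (10). The sketch below reproduces the three distinct off-diagonal attraction terms from the quoted parameters b = 4, c = 0.4 and the pairwise equilibrium distances 1, √3, and 2 of the triangular formation.

```python
import math

# Reproducing the three distinct attraction terms of Table 1 from
# a = b * exp(-delta^2 / c), Eq. (10), with b = 4 and c = 0.4.
b, c = 4.0, 0.4

for delta in (1.0, math.sqrt(3.0), 2.0):    # pair distances in the formation
    a = b * math.exp(-delta**2 / c)
    print(round(delta, 4), round(a, 6))     # -> 0.32834, 0.002212, 0.000182
```

These are exactly the values that appear in Table 1, confirming that the widely separated vertex pairs receive far weaker attraction than adjacent agents.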
Figure 12: The swarm agent formation, with Agent 1 as the
driver and Agents 2-6 arranged around the triangle.
3.4 Obstacle Avoidance
One benefit of creating attraction/repulsion rules for a swarm
of agents, robots or UAVs is the simplicity in adding new
rules for additional constraints. Repulsion walls or boundaries
can be used to direct the motion of the swarm away from
obstacles, boundaries or other locations with known dangers
like ravines, quicksand, or landmines. By defining permanent
artificial repulsion-only agents in specific locations, the
swarming agents naturally avoid those places. A simple
obstacle course was created to depict a swarm moving through
a forest in Figure 13. In this case circular pockets of repulsion
were created at the center of the trees to direct the swarm
away from the obstacles. The magnitude of the tree repulsion
was adjusted through a series of simulations until the agents
maneuvered past the obstacles with sufficient clearances. In
these simulations the driver agent was told to move in the +x
direction at a constant speed, but to still obey the repulsion
effects from the obstacles. As a result, Agent 1 moves to the
right except when in the vicinity of the obstacles, where the
agent redirects the motion of the group away from the
problematic region.

Figure 11: More complicated formation motion. The agents are
not equally out of formation because their interaction velocity
functions are unique.

Table 1: The attraction terms for different inter-agent functions
in the triangular formation.

         Agent 1   Agent 2   Agent 3   Agent 4   Agent 5   Agent 6
Agent 1  0         0.32834   0.000182  0.002212  0.000182  0.32834
Agent 2  0.32834   0         0.32834   0.32834   0.002212  0.32834
Agent 3  0.000182  0.32834   0         0.32834   0.000182  0.002212
Agent 4  0.002212  0.32834   0.32834   0         0.32834   0.32834
Agent 5  0.000182  0.002212  0.000182  0.32834   0         0.32834
Agent 6  0.32834   0.32834   0.002212  0.32834   0.32834   0

Figure 13: Obstacle avoidance by using localized repulsion
gradients. (a) A triangular swarm formation navigating through
a forest by creating steep repulsion gradients at the locations of
trees. (b) A closer view of the agents avoiding the trees.

The other agents are also repulsed by the
regions around the trees, so none of the agents run the risk of
impacting the obstacles. Using this technique of gradient-
based swarm motion augmentation, modelers or the swarm
agents themselves can create repulsion boundaries or regions
of attraction to direct the motion of the swarm—much like
pheromones are used by ant colonies to direct the motion of
the natural swarm.
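A repulsion-only artificial agent of this kind can be sketched as follows: the obstacle contributes only the repulsion half of the forcing function, so it pushes agents away at close range and is effectively invisible at a distance. The obstacle position, repulsion gains, and drive speed below are illustrative assumptions, not the tuned values from the forest simulation.

```python
import numpy as np

# Obstacle avoidance with a permanent, repulsion-only artificial agent.
b_obs, c_obs = 30.0, 0.5                    # strong, wide repulsion pocket
obstacle = np.array([3.0, 0.0])

def obstacle_push(x):
    """Repulsion-only velocity contribution of the obstacle on an agent at x."""
    y = x - obstacle
    return b_obs * np.exp(-np.dot(y, y) / c_obs) * y

# A single driven agent heading in +x is deflected around the obstacle.
x = np.array([0.0, 0.3])                    # start slightly off the obstacle's axis
t_step, v_drive = 0.01, np.array([0.5, 0.0])
min_clearance = np.inf

for _ in range(4000):
    x = x + t_step * (v_drive + obstacle_push(x))
    min_clearance = min(min_clearance, float(np.linalg.norm(x - obstacle)))

print(x, min_clearance)                     # well past the obstacle, never colliding
```

As in the forest simulation, the repulsion gain would be tuned until the clearance around each obstacle was sufficient for the whole formation.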
3.5 Error Studies
Lastly, to determine how well the swarm parameters are
performing and determine an appropriate maximum speed for
the leader, simulations were run to check the error in the final
positions of agents.
A simulation was created where a swarm at
equilibrium has the driver agent move at speeds from 0.02 to
1.00 unit/s in the +x-direction for 25 s while the other agents
in the formation attempt to stay in equilibrium. With the driver
starting at (0,0), the final formations of the agents for different
speeds are plotted together in Figure 13. The error for these
simulations was calculated by taking the desired equilibrium
normal distance between agents and subtracting the simulated
normal distances after 25 seconds. These errors were then
averaged for all the agents to find a normalized error for the
agents as shown in Eq. (11). The error increases with
increasing leader speed as shown in Figure 14.
\mathrm{Error} = \frac{1}{M} \sum_i \sum_j \left( \delta^{ij} - \|x^i - x^j\| \right)   (11)
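A direct implementation of this error metric is sketched below. The magnitude of each pair's deviation is accumulated, an assumption about the intended metric so that stretched and compressed pairs do not cancel, and the tiny two-agent arrays are illustrative stand-ins for a simulation's final state.

```python
import numpy as np

# Formation error of Eq. (11): desired equilibrium distance of each agent
# pair minus its simulated distance, averaged over the swarm.
def formation_error(x, delta):
    """x: (M, n) agent positions; delta: (M, M) desired pair distances."""
    M = len(x)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    mask = ~np.eye(M, dtype=bool)           # skip the zero self-distances
    return np.sum(np.abs(delta[mask] - d[mask])) / M

delta = np.array([[0.0, 1.0],
                  [1.0, 0.0]])              # desired unit spacing

x_eq = np.array([[0.0, 0.0], [1.0, 0.0]])   # pair exactly at equilibrium
print(formation_error(x_eq, delta))         # 0.0

x_str = np.array([[0.0, 0.0], [1.5, 0.0]])  # pair stretched to 1.5 units
print(formation_error(x_str, delta))        # 0.5
```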
Figure 13: Formations for different leader speeds. Leftmost is
0.02 unit/s and rightmost is 1.00 unit/s. Note agent 4 is left
unconnected for purposes of illustration.
Figure 14: Error vs. leader speed with parameter set {t_step = 0.05,
Total Time = 25 s, b = 4, c = 0.4}.
The error can be further dissected to show the
individual agent’s influence on the error in the formation.
This is shown in Figure 15. From these plots it is clear the
majority of the error is due to the swarm being unable to
maintain a close formation with the leader (Agent 1). As
shown in Figure 13, when the leader gets farther ahead of the
others the triangular formation compresses vertically because
the attraction in the direction of Agent 1 increases linearly
with separation distance, and the vertical component of that
vector drives the agents toward the x-axis. As a result the
formation as a whole squeezes vertically and the errors
between the remaining agents also increase.
In order to reduce the error in the model and remove
the runaway leader effect, the b value was increased to 10 for
all agents. Recall the attraction term is derived for formation
control simulations in Eq. (10). The error results from this
simulation were significantly smaller because the attraction
force was greater in the swarm (shown in Figure 16), which
provided the agents more attraction force and the power to
stay with the leader agent, shown in Figure 17. For this
parameter set there is little vertical deformation in the swarm
for the same set of leader velocities, which is reflected in the
error results in Figure 18.
Figure 15: The error from each of the swarm agents.
Figure 16: Original and updated forcing functions.
Figure 17: Results from augmenting the gij function with leader
speeds 0.02 to 1.00 unit/s.
Figure 18: Error from the new function.
Although this simulation suggests that continuing to
increase the attraction term would keep improving swarm
performance, increasing the a term brings other behavioral
sacrifices, such as settling time and overshoot, that must be
taken into account. In addition, there
are more realistic limitations to how fast and accurately the
agents can move and accelerate in noisy environments. Also,
the stability of the swarm simulation could come into play as
the equation of motion is modified. These topics are important
for swarm applications and should be investigated in the
future.
4 CONCLUSIONS
Originating from biological systems, the field of swarm theory
has emerged as a powerful tool in science and engineering.
Complexity theorists have shown that simple agent rules and
interactions can result in highly adaptive, sophisticated system
behavior. This concept has led to the development of
artificial swarm systems which have spurred the research
areas of packing, foraging, clustering, obstacle avoidance, and
tracking in robot collectives. In swarm theory, governing
equations of motion provide distributed, autonomous control
of multi-robot systems.
This paper demonstrates and extends swarm applications based on the work of Gazi and Passino, and discusses areas of difficulty and further study. The cases of circular, spherical and triangular formation control are demonstrated. It was also shown that obstacle avoidance is a straightforward extension that could readily be adapted to different geometries, swarm formations and requirements. Motion using a follow-the-leader method is demonstrated, and the error associated with prescribing the leader a given velocity is studied. The error analysis has future applications in adaptively controlling the attraction/repulsion functions of the agents: if high velocities were desired, the attraction term a could be ramped up, but when the agents were in situations requiring precise formations, such as locations with obstacles or during combat, the a term could be decreased. A number of future studies should be considered in this area to investigate swarm settling time and overshoot issues. Additionally, modeling the mass of the agents would require the agents to control inertial bodies, and would therefore provide more realistic results for physical systems.
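The adaptive-attraction idea suggested above can be sketched with a toy first-order follower; the gain schedule, parameter names, and values below are hypothetical illustrations, not the controller of this paper. For a follower obeying x' = a(x_L - x) behind a leader moving at constant speed v, the steady-state lag is v/a, so ramping the a term up directly shrinks the follow-the-leader error.

```python
def adaptive_attraction_gain(leader_speed, a_min=0.5, a_max=5.0, v_ref=1.0):
    """Hypothetical gain schedule: ramp the attraction term 'a' with
    leader speed, clamped to [a_min, a_max], so fast transits pull the
    swarm harder while precise maneuvers use a gentler gain."""
    frac = min(leader_speed / v_ref, 1.0)
    return a_min + (a_max - a_min) * frac

def simulate_lag(v, a, dt=0.001, t_end=20.0):
    """Euler simulation of a first-order follower x' = a * (x_L - x)
    behind a leader moving at constant speed v; returns the final lag,
    which settles to v / a."""
    x_leader, x_follower, t = 0.0, 0.0, 0.0
    while t < t_end:
        x_follower += dt * a * (x_leader - x_follower)
        x_leader += dt * v
        t += dt
    return x_leader - x_follower
```

Doubling a halves the steady-state lag in this toy model, which mirrors the error trends observed in the simulations; with inertia included, the same increase in a would also stiffen the response and aggravate overshoot.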
In the future, as the field of swarm theory continues
to grow, more problems will be solved with distributed control
and swarm intelligence methodologies. It will take time for scientists and engineers to become aware of all the applications of swarm tools, but as the advantages of such techniques emerge, greater emphasis will be placed on the implementation of these agent-based solutions.
5 ACKNOWLEDGMENTS
I would like to thank Todd Siflett for introducing me to swarm
theory and Dr. Michael Leamy for his guidance.
REFERENCES
[1] Lodding, K.N., The Hitchhiker's Guide to Biomorphic
Software, Queue, v.2 n.4, June 2004.
[2] Grasse, P. P. La reconstruction du nid et les coordinations
inter-individuelles chez Bellicositermes natalensis et
Cubitermes sp. La theorie de la stigmergie: Essai
d’interpretation des termites constructeurs. Insect
Societies 6 (1959), 41–83.
[3] Deneubourg, J. L, Goss, S., Franks, N. R., Sendova-
Franks, A., Detrain, C., and Chretien, L. The Dynam-
ics of Collective Sorting: Robot-like Ant and Ant-
like Robots. In Simulation of Adaptive Behavior:
From Animals to Animats. Meyers, J. A., and
Wilson, S.W., eds. MIT Press/Bradford Books,
Cambridge: MA (1990), 356–363.
[4] Bonabeau, E., Dorigo, M., and Theraulaz, G. Swarm
Intelligence: From Natural to Artificial Systems.
Oxford University Press, New York: NY, 1999.
[5] Holland, J., 1993, Hidden Order: How Adaptation Builds
Complexity. Addison-Wesley Pub., Menlo Park, CA.
[6] Murthy, V. K., & Krishnamurthy, E. V., 2007, Interacting
Agents in a Network for in silico Modeling of
Nature-Inspired Smart Systems, Computational
Intelligence for Agent-based Systems, pp. 177- 231.
[7] Reynolds, C. W. Flocks, Herds, and Schools: A Distrib-
uted Behavioral Model. Computer Graphics 21, 4
(1987), 25–34.
[8] T.I. Zohdi, 2009. Mechanistic Modeling of Swarms,
Comput. Methods Appl. Mech. Engrg.
[9] Gazi, V., Passino, K.M., 2004, Stability Analysis of Social
Foraging Swarms, IEEE Transactions on Systems,
Man, and Cybernetics vol.34, pp. 539-557.
[10] Gazi, V., Passino, K.M., "A class of attraction/repulsion
functions for stable swarm aggregations,"
Proceedings of the 41st IEEE Conference on
Decision and Control, 2002, vol.3, pp. 2842-2847,
10-13 Dec. 2002.
[11] Gazi, V., Passino, K.M., "Stability analysis of swarms,"
IEEE Transactions on Automatic Control, vol. 48,
no. 4, pp. 692-697, April 2003.
[12] Abatti, J. M., 2005, Small Power: The Role of Micro and
Small UAVs in the Future, Master's Thesis, Maxwell
Air Force Base, Alabama.
[13] Dickie, A., 2002, Modeling Robot Swarms Using Agent-
Based Simulation, Naval Postgraduate School,
Master's Thesis, Monterey, CA.
[14] Baldwin, P.D., 2005, Modeling Information Quality
Expectation in Unmanned Aerial Vehicle Swarm
Sensor Databases, Air Force Institute of Technology,
Master’s Thesis, Wright-Patterson AFB, OH.
[15] Dorigo, M., "SWARM-BOT: an experiment in swarm
robotics," Proceedings 2005 IEEE Swarm Intelli-
gence Symposium, 2005. SIS 2005., pp. 192-200, 8-
10 June 2005.
[16] Tuci, E., Gross, R., Trianni, V., Mondada, F., Bonani, M.,
and Dorigo, M. 2006. Cooperation through self-
assembly in multi-robot systems. ACM Trans. Auton.
Adapt. Syst. 1, 2 (Dec. 2006), 115-150.
[17] Baldassarre, G., Parisi, D., and Nolfi, S. 2006. Distributed
Coordination of Simulated Robots Based on Self-
Organization. Artif. Life 12, 3 (Jul. 2006), 289-311.
[18] Gechter, F., Chevrier, V., and Charpillet, F. 2006. A
reactive agent-based problem-solving model:
Application to localization and tracking. ACM Trans.
Auton. Adapt. Syst. 1, 2 (Dec. 2006), 189-222.
[19] Dorigo, M., 1992. Optimization, Learning and Natural
Algorithms, PhD thesis, Politecnico di Milano, Italy.
[20] Panait, L., Luke, S., "A pheromone-based utility model
for collaborative foraging," Autonomous Agents and
Multiagent Systems, 2004. AAMAS 2004. pp. 36-43,
2004.
[21] Tinkham, A. and Menezes, R. 2004. Simulating robot
collective behavior using StarLogo. In Proceedings
of the 42nd Annual Southeast Regional Conference
(Huntsville, Alabama, April 02 - 03, 2004). ACM-SE
42. ACM, New York, NY, 396-401.
[22] Sauter, J. A., Matthews, R., Van Dyke Parunak, H., and
Brueckner, S. 2002. Evolving adaptive pheromone
path planning mechanisms. In Proceedings of the
First international Joint Conference on Autonomous
Agents and Multiagent Systems: Part 1 (Bologna,
Italy, July 15 - 19, 2002). AAMAS '02. ACM, New
York, NY, 434-440.
[23] Yoon, J. S. and Maher, M. L. 2005. A swarm algorithm
for wayfinding in dynamic virtual worlds. In
Proceedings of the ACM Symposium on Virtual
Reality Software and Technology (Monterey, CA,
USA, November 07-09, 2005). VRST '05. ACM,
New York, NY.
[24] Hartmann, V. 2005. Evolving agent swarms for clustering
and sorting. In Proceedings of the 2005 Conference
on Genetic and Evolutionary Computation
(Washington DC, USA, June 25-29, 2005). H. Beyer,
Ed. GECCO '05. ACM, New York, NY, 217-224.
[25] Chen, Z., Meng, Q., "An incremental clustering algorithm
based on swarm intelligence theory," Machine
Learning and Cybernetics, 2004, vol. 3, pp. 1768-
1772, 26-29 Aug. 2004.
[26] Panait, L. and Luke, S. 2005. Cooperative Multi-Agent
Learning: The State of the Art. Autonomous Agents
and Multi-Agent Systems 11, 3 (Nov. 2005), 387-434.
In this work, we provide an introduction to an emerging field which has recently received considerable attention, namely the analysis and modeling of swarms. In a very general sense, the term swarm is usually meant to signify a group of objects (agents) that interact with one another and have a collective goal. Point-mass, particulate-like models are frequently used to simulate the behavior of groups comprised of individual units whose interaction are represented by inter-particle forces. The interaction “forces” can represent, for example in the case of Unmanned Airborne Vehicles (UAVs), motorized propulsion arising from inter-vehicle communication and then actuation resulting in thrust. For a swarm member, these forces have two main components, attraction and repulsion, with the fellow swarm members and the surrounding environment. This work develops and investigates (1) basic models of such systems, (2) properties of swarm models and (3) numerical algorithms, in particular temporally adaptive methods, for swarm-like systems.