Dealing with Fog of War in a Real Time Strategy Game Environment

Johan Hagelbäck and Stefan J. Johansson
Abstract Bots for Real Time Strategy (RTS) games provide
a rich challenge to implement. A bot controls a number of units
that may have to navigate in a partially unknown environment,
while at the same time search for enemies and coordinate
attacks to fight them down. It is often the case that RTS AIs
cheat in the sense that they get perfect information about
the game world to improve the performance of the tactics
and planning behavior. We show how a multi-agent potential
field based bot can be modified to play an RTS game without
cheating, i.e. with incomplete information, and still be able to
perform well without spending more resources than its cheating
version in a tournament.
I. INTRODUCTION

A Real-time Strategy (RTS) game is a game in which the
players use resource gathering, base building, technological
development and unit control in order to defeat their oppo-
nents, typically in some kind of war setting. An RTS game is
not turn-based in contrast to board games such as Risk and
Diplomacy. Instead, all decisions by all players have to be
made in real-time. The player usually has an isometric birds
view perspective of the battlefield although some 3D RTS
games allow different camera angles. The real-time aspect
makes the RTS genre suitable for multiplayer games since
it allows players to interact with the game independently of
each other and does not let them wait for someone else to
finish a turn.
In RTS games computer bots often cheat in the sense that
they get access to complete visibility (perfect information)
of the whole game world, including the positions of the
opponent units. Cheating is, according to Nareyek, "very annoying for the player if discovered", and he predicts that game AIs will get a larger share of the processing power in the future, which in turn may open up for the possibility to use more sophisticated AIs [1].
We will show how a bot that uses potential fields can
be modified to deal with imperfect information, i.e. the
parts of the game world where no own units are present
are unknown (usually referred to as Fog of War, or FoW).
We will also show that our modified bot with imperfect information, named FoWbot, not only performs equally well compared to a version with perfect information (called PIbot), but also on average consumes less computational power than its cheating counterpart.
Johan Hagelbäck and Stefan J. Johansson are with the Department of Software and Systems Engineering, Blekinge Institute of Technology, Box 520, SE-372 25, Ronneby, Sweden.
A. Research Question and Methodology
The main research question of this paper is: Is it possible to construct a bot without access to perfect information for RTS games that performs as well as bots that have perfect information? This breaks down into:
1) What is the difference in performance between using a
FoWbot compared to a PIbot in terms of a) the number
of won matches, and b) the number of units and bases
left if the bot wins?
2) To what degree will a field of exploration help the FoW
bot to explore the unknown environment?
3) What is the difference in the computational needs for
the FoWbot compared to the PIbot?
In order to approach the research questions above, we will implement a FoW version of our original PIbot and compare its performance, exploration and processing needs with those of the PIbot.
B. Outline
First we describe the domain followed by a description of
our Multi-agent Potential Field (MAP F) player. In the next
section we describe the adjustments needed to implement a
working FoW bot and then we present the experiments and
their results. We finish by discussing the results, drawing some conclusions and laying out possible directions for future work.
II. ORTS

Open Real Time Strategy (ORTS) [2] is a real-time strategy game engine developed as a tool for researchers within AI in general and game AI in particular. ORTS uses a client-server
architecture with a game server and players connected as
clients. Each timeframe clients receive a data structure from
the server containing the current state of the game. Clients
can then activate their units in various ways by sending
commands to them. These commands can be like move unit A to (x, y) or attack opponent unit X with unit A. All client commands for each time frame are sent in parallel, and executed in random order by the server.
Users can define different types of games in scripts where
units, structures and their interactions are described. All types
of games from resource gathering to full real time strategy
(RTS) games are supported. We focus here on one type of
two-player game, Tankbattle, which was one of the 2007
ORTS competitions [2].
In Tankbattle each player has 50 tanks and five bases. The
goal is to destroy the bases of the opponent. Tanks are heavy
units with long fire range and devastating firepower but a
long cool-down period, i.e. the time after an attack before
the unit is ready to attack again. Bases can take a lot of
damage before they are destroyed, but they have no defence
mechanism so it may be important to defend own bases with
tanks. The map in a Tankbattle game has randomly generated
terrain with passable lowland and impassable cliffs.
The game contains a number of neutral units (sheep).
These are small, indestructible units moving randomly
around the map. The purpose of them is to make pathfinding
and collision detection more complex.
We have in our experiments chosen to use an environment based on the best participants of last year's ORTS tournament [3].
A. Descriptions of Opponents
The following opponents were used in the experiments:
1) NUS: The team NUS uses finite state machines and
influence maps in high-order planning on group level. The
units in a group spread out on a line and surround the
opponent units at Maximum Shooting Distance (MSD). Units
use the cool-down period to keep out of MSD. Pathfinding
and a flocking algorithm are used to avoid collisions.
2) UBC: This team gathers units in squads of 10 tanks.
Squads can be merged with other squads or split into two
during the game. Pathfinding is combined with force fields
to avoid obstacles and a bit-mask for collision avoidance.
Units spread out at MSD when attacking. Weaker squads
are assigned to weak spots or corners of the opponent unit
cluster. If an own base is attacked, it may decide to try to
defend the base.
3) WarsawB: The WarsawB team uses pathfinding with an
additional dynamic graph for moving objects. The units use
repelling force field collision avoidance. Units are gathered
in one large squad. When the squad attacks, its units spread
out on a line at MSD and attack the weakest opponent unit
in range.
4) Uofa06: Unfortunately, we have no description of how this bot works, other than that it was the winner of the 2006 ORTS competition. Since we failed to get the 2007 version of the UofA bot to run without stability problems under the latest update of the ORTS environment, we omitted it from our experiments.
III. MULTI-AGENT POTENTIAL FIELDS

In 1985, Oussama Khatib introduced a new concept while he was looking for a real-time obstacle avoidance approach for manipulators and mobile robots. The technique, which he called Artificial Potential Fields, moves a manipulator in a
field of forces. The position to be reached is an attractive pole
for the end effector (e.g. a robot) and obstacles are repulsive
surfaces for the manipulator parts [4]. Later on Arkin [5]
updated the knowledge by creating another technique using
superposition of spatial vector fields in order to generate
behaviours in his so called motor schema concept.
Many studies concerning potential fields are related to
spatial navigation and obstacle avoidance, see e.g. [6], [7].
The technique is really helpful for the avoidance of simple
obstacles even though they are numerous. Combined with an
autonomous navigation approach, the result is even better,
being able to surpass highly complicated obstacles [8].
Lately some other interesting applications for potential
fields have been presented. The use of potential fields in
architectures of multi agent systems has shown promising
results. Howard et al. developed a mobile sensor network
deployment using potential fields [9], and potential fields
have been used in robot soccer [10], [11]. Thurau et al. [12] have developed a game bot which learns reactive behaviours (or potential fields) for actions in the First-Person Shooter (FPS) game Quake II through imitation.
In [13] we propose a methodology for creating an RTS game bot based on Multi-agent Potential Fields (MAPF). This bot was further improved in Hagelbäck and Johansson [14], and it is the improved version that we have used in this paper.
IV. MAPF IN ORTS

We have implemented an ORTS client for playing Tankbattle games based on Multi-agent Potential Fields (MAPF), following the proposed methodology of Hagelbäck and Johansson [13]. It includes the following six steps:
1) Identifying the objects
2) Identifying the fields
3) Assigning the charges
4) Deciding on the granularities
5) Agentifying the core objects
6) Constructing the MAS architecture
Below we will describe the creation of our MA PF solution.
A. Identifying objects
We identify the following objects in our application: cliffs, sheep, own and opponent tanks, and own and opponent bases.
B. Identifying fields
We identified five tasks in ORTS Tankbattle:
avoid colliding with moving objects,
avoid colliding with cliffs,
find the enemy,
destroy the enemy's forces, and
defend the bases.
The latter task will not be addressed in this study (instead, see Hagelbäck and Johansson [14]), but the rest lead us to four types of potential fields: Field of navigation, Strategic field, Tactical field, and Field of exploration.
The field of navigation is generated by repelling static terrain and may be pre-calculated in the initialisation phase. We would like agents to avoid getting too close to objects where they may get stuck, and instead smoothly pass around them.
The strategic field is an attracting field. It makes agents go
towards the opponents and place themselves at appropriate
distances from where they can fight the enemies.
Our own units, own bases and the sheep generate small
repelling fields. The purpose is that we would like our agents
to avoid colliding with each other or the bases as well as
avoiding the sheep.
The field of exploration helps the units to explore unknown
parts of the game map. Since it is only relevant in the case
we have incomplete information, it is not part of the PIbot
that we are about to describe now. More information about
the field of exploration is found in Section V-C.
C. Assigning charges
Each unit (own or enemy), base, sheep and cliff has a set
of charges which generate a potential field around the object.
All fields generated by objects are weighted and summed to
form a total field which is used by agents when selecting
actions. The initial set of charges was found using trial and error. However, the order of importance between the objects simplifies the process of finding good values, and the method seems robust enough to allow the bot to work well anyhow.
We have tried to use traditional AI methods such as genetic
algorithms to tune the parameters of the bot, but without
success. We used the following charges in the PIbot:1
a) The opponent units:

p(d) =
  k1 * d,        if d in [0, MSD - a[
  c1 - d,        if d in [MSD - a, MSD]
  c2 - k2 * d,   if d in ]MSD, MDR]

Unit | k1 | k2    | c1   | c2 | MSD | a | MDR
Tank | 2  | 0.22  | 24.1 | 15 | 7   | 2 | 68
Base | 3  | 0.255 | 49.1 | 15 | 12  | 2 | 130
b) Own bases: Own bases generate a repelling field for obstacle avoidance. Below in Equation 2 is the function for calculating the potential pownB(d) at distance d (in tiles) from the center of the base.

pownB(d) =
  5.25 * d - 37.5,  if d <= 4
  3.5 * d - 25,     if d in ]4, 7.14]
  0,                if d > 7.14
c) The own tanks: The potential pownU(d) at distance d (in tiles) from the center of an own tank is calculated as:

pownU(d) =
  -20,             if d <= 0.875
  3.2 * d - 10.8,  if d in ]0.875, l]
  0,               if d >= l
d) Sheep: Sheep generate a small repelling field for obstacle avoidance. The potential psheep(d) at distance d (in tiles) from the center of a sheep is calculated as:

psheep(d) =
  -10,  if d <= 1
  -1,   if d in ]1, 2]
  0,    if d > 2
1 I = [a, b[ denotes the half-open interval where a is in I, but b is not.
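To make the charge definitions concrete, here is a minimal sketch (ours, not the authors' code) of the opponent-unit and sheep charges as reconstructed above, using the Tank row of the table as defaults (k1 = 2, k2 = 0.22, c1 = 24.1, c2 = 15, MSD = 7, a = 2, MDR = 68); function names are ours.

```python
# Sketch of two of the charge functions above; distances are in tiles.

def p_opponent(d, k1=2.0, k2=0.22, c1=24.1, c2=15.0, msd=7.0, a=2.0, mdr=68.0):
    """Attractive charge around an opponent unit, peaking near MSD."""
    if 0 <= d < msd - a:
        return k1 * d
    if msd - a <= d <= msd:
        return c1 - d
    if msd < d <= mdr:
        return c2 - k2 * d
    return 0.0

def p_sheep(d):
    """Small repelling charge around a sheep, for obstacle avoidance."""
    if d <= 1:
        return -10.0
    if d <= 2:
        return -1.0
    return 0.0
```

With these constants the potential rises toward the ring at MSD, where the agents prefer to stand and fire, and then decays slowly out to the maximum detection range MDR.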
Fig. 1. Part of the map during a Tankbattle game. The upper picture shows our agents (light-grey circles), an opponent unit (white circle) and three sheep (small dark-grey circles). The lower picture shows the total potential field for the same area. Light areas have high potential and dark areas low potential.
Figure 1 shows an example of a part of the map during
a Tankbattle game. The screen shots are from the 2D GUI
available in the ORTS server and a visualisation interface for
potentials that we have developed. The light ring around the
opponent unit, located at maximum shooting distance of our
tanks, is the distance our agents prefer to attack opponent
units from. The picture also shows the small repelling fields
generated by own agents and the sheep.
D. Finding the right granularity
Concerning the granularity, we use full resolution (down to
the point level) but only evaluate eight directions in addition
to the position where the unit is. However, this is done in
each time frame for each of our units.
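Given this granularity, each unit agent's action selection reduces to a simple hill-climbing step per frame. A minimal sketch, assuming a total_potential(x, y) function (a hypothetical name) that sums all the weighted fields:

```python
# Evaluate the unit's own position plus the eight surrounding directions
# and pick the position with the highest total potential.

def best_move(x, y, total_potential):
    candidates = [(x + dx, y + dy)
                  for dx in (-1, 0, 1)
                  for dy in (-1, 0, 1)]
    return max(candidates, key=lambda pos: total_potential(*pos))
```

If the unit's current position scores highest, it simply stays where it is.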
E. Agentifying the objects
We put one agent in every own unit able to act in some
way (thus, the bases are excluded). We have chosen not to simulate the opponent using agents; although that may be possible, it is outside the scope of this experiment.
F. Constructing the MAS
All of our unit agents communicate with a common interface agent to get and leave information about the state of the game, such as the positions of (visible) opponents, and to submit the actions taken by our units. The bot also has an attack coordinating agent that points out which opponent units to attack, if there are several options.
1) Attack coordination: We use a coordinator agent to globally optimise attacks on opponent units. The coordinator aims to destroy as many opponent units as possible each frame by concentrating fire on already damaged units. Below is a description of how the coordinator agent works. After the coordinator is finished we have a near-optimal allocation of which of our agents are dedicated to attack which opponent units or bases.
The coordinator uses an attack possibility matrix. The n x m matrix A defines which opponent units i (out of n) within MSD can be attacked by which of our agents k (out of m), as follows:

a_k,i = 1 if agent k can attack opponent unit i
a_k,i = 0 if agent k cannot attack opponent unit i

A =
  a_0,0    ...  a_m-1,0
   ...
  a_0,n-1  ...  a_m-1,n-1
We also need to keep track of the current hit points HP_i of each opponent unit i. Let us follow the example below to see how the coordination heuristic works. In the example there are six opponent units with hit points:

HP0 = 2, HP1 = 3, HP2 = 3, HP3 = 4, HP4 = 4, HP5 = 3
First we sort the rows so that the highest priority targets (units with low HP) are in the top rows. After sorting, the rows of the example matrix are ordered:

HP0 = 2, HP1 = 3, HP2 = 3, HP5 = 3, HP4 = 4, HP3 = 4
The next step is to find opponent units that can be destroyed this frame (i.e. we have enough agents able to attack an opponent unit to reduce its HP to 0). In the example we have enough agents within range to destroy units 0 and 1. We must also make sure that the agents attacking unit 0 or 1 are not attacking other opponent units in A. This is done by assigning a 0 value to the rest of the column in A for all agents attacking unit 0 or 1.
In the updated example matrix, the columns of all agents assigned to attack unit 0 or 1 are set to 0; the remaining elements have not been altered in this step and are the same as in the sorted matrix.
The final step is to make sure that the agents in the remaining rows (3 to 6) attack only one opponent unit each. This is done by, as in the previous step, selecting a target i for each agent (start with row 3 and process each row in ascending order) and assigning a 0 to the rest of the column in A for the agent attacking i.
In the example, the fire coordinator agent has optimised the attacks to:
Unit 0 is attacked by agents 0 and 3. It should be destroyed.
Unit 1 is attacked by agents 1, 2 and 6. It should be destroyed.
Unit 5 is attacked by agent 7. Its HP should be reduced
to 2.
Unit 4 is attacked by agents 4 and 5. Its HP should be
reduced to 2.
Units 2 and 3 are not attacked by any agent.
2) The Internals of the Coordinator Agent: The coordinator agent first receives information from each of the own agents. This information contains the agent's position and ready-status, as well as a list of the opponent units that are within range. Ready-status means that an agent is ready to fire at enemies; after an attack a unit has a cool-down period during which it cannot fire. From the server, the coordinator gets the current hit point status of the opponent units.
Now, the coordinator filters the agent information so that only those agents that are i) ready to fire, and ii) have at least one opponent unit within MSD, are left.
For each agent k that is ready to fire, we iterate through all opponent units and bases. To see if k can attack unit i we use a three-level check:
1) Agent k must be within Manhattan distance * 2 of i (very fast but inaccurate calculation)
2) Agent k must be within real (Euclidean) distance of i (slower but accurate calculation)
3) Opponent unit i must be in line of sight of k (very slow but necessary to detect obstacles in front of i)
The motivation behind the three-level check is to start with
fast but inaccurate calculations, and for each level passed a
slower and more accurate check is performed. This reduces
2 The Manhattan distance between two coordinates (x1, y1), (x2, y2) is given by abs(x1 - x2) + abs(y1 - y2).
CPU usage by skipping demanding calculations such as line-
of-sight for opponent units or bases that are far away.
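The three-level check can be sketched as follows (our sketch, not the authors' code; line_of_sight stands in for the expensive map query, and reading the Manhattan pre-check as "within 2 * MSD" is our interpretation of the text):

```python
from math import hypot

def can_attack(agent, target, msd, line_of_sight):
    """Three-level check, cheapest test first."""
    ax, ay = agent
    tx, ty = target
    # 1) fast but inaccurate: Manhattan-distance rejection
    if abs(ax - tx) + abs(ay - ty) > 2 * msd:
        return False
    # 2) exact Euclidean range check against MSD
    if hypot(ax - tx, ay - ty) > msd:
        return False
    # 3) slow line-of-sight test, only reached for nearby targets
    return line_of_sight(agent, target)
```

Since Manhattan distance never exceeds twice the Euclidean distance, the first test can safely reject far-away targets without ever reaching the slower checks.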
The next step is to sort the rows in A in ascending order based on their HP (prioritising attacks on damaged units). If two opponent units have the same hit points left, the unit i which can be attacked by the largest number of agents k should be first (i.e. concentrate fire to damage a single unit as much as possible rather than spreading the fire). When an agent attacks an opponent unit it deals a damage value randomly chosen between the attacking unit's minimum (mindmg) and maximum (maxdmg) damage. A unit hit by an attack gets its HP reduced by the damage value of the attacking unit minus its own armour value. The armour value is static and a unit's armour cannot be destroyed.
The next step is to find opponent units which can be destroyed this frame. For every opponent unit i in A, check if enough agents k can attack i to destroy it:

(sum over k of a_k,i) * (damage_u - armour_i) >= HP_i    (12)

armour_i is the armour value for the unit type of i (0 for marines and bases, 1 for tanks) and damage_u = mindmg + p * (maxdmg - mindmg), where p is in [0, 1]. We have used a p value of 0.75, but it can be changed to alter the probability of actually destroying opponent units.
If more agents can attack i than is necessary to destroy it, remove the agents with the most occurrences in A from attacking i. The motivation behind this is that the agents with the most occurrences in A have more options when attacking other units.
At last we must make sure that the agents attacking i do not attack other opponent units in A. This is done by assigning a 0 value to the rest of the column.
The final step is to make sure that agents not processed in the previous step attack only one opponent unit each. Iterate through every i that cannot be destroyed but can be attacked by at least one agent k, and assign a 0 value to the rest of the column for each k attacking i.
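The coordination pass described above can be condensed into a sketch like the one below (ours, not the authors' code). Damage is simplified to a fixed expected value per shot, where the paper uses mindmg + p * (maxdmg - mindmg) minus armour.

```python
def coordinate_fire(attack, hp, damage=1):
    """attack[k][i]: 1 if agent k can attack opponent i, else 0.
    Returns one target index (or None) per agent."""
    n_agents, n_targets = len(attack), len(hp)
    # sort targets: lowest HP first, ties broken by number of attackers
    order = sorted(range(n_targets),
                   key=lambda i: (hp[i], -sum(row[i] for row in attack)))
    assigned = [None] * n_agents
    # first pass: commit just enough agents to targets that can be
    # destroyed this frame, preferring agents with the fewest options
    for i in order:
        attackers = [k for k in range(n_agents)
                     if attack[k][i] and assigned[k] is None]
        if damage * len(attackers) >= hp[i]:
            attackers.sort(key=lambda k: sum(attack[k]))
            need = -(-hp[i] // damage)  # ceiling division
            for k in attackers[:need]:
                assigned[k] = i
    # second pass: every remaining agent attacks one reachable target
    for i in order:
        for k in range(n_agents):
            if assigned[k] is None and attack[k][i]:
                assigned[k] = i
    return assigned
```

The two passes mirror the destroy-first step and the one-target-per-agent cleanup step described in the text.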
V. MODIFICATIONS FOR THE FOG OF WAR

To enable FoW for only one client, we made a minor change in the ORTS server. We added an extra condition to an IF statement that always enables fog of war for client 0.
Due to this, our client is always client 0 in the experiments
(of course, it does not matter from the game point of view
if the bots play as client 0 or client 1).
To deal with fog of war we have made some changes to the bot described in Hagelbäck and Johansson [14]. These changes deal with issues like remembering the locations of enemy bases, exploring unknown terrain to find enemy bases and units, and remembering the terrain (i.e. the positions of the impassable cliffs on the map) even when no own units are near. Another issue is dealing with performance, since these changes are supposed to require more runtime calculations than the PIbot. Below are proposed solutions to these issues.
A. Remember Locations of the Enemies
In ORTS a data structure with the current game world state is sent each frame from the server to the connected clients. If fog of war is enabled, the location of an enemy base is only included in the data structure if an own unit is within visibility range of the base. This means that if an enemy base has been spotted by an own unit and that unit is destroyed, the location of the base is no longer sent in the data structure. Therefore our bot has a dedicated global map
agent to which all detected objects are reported. This agent
always remembers the location of previously spotted enemy
bases until a base is destroyed, as well as distributes the
positions of detected enemy tanks to all the own units.
The global map agent also takes care of the map sharing
concerning the opponent tank units. However, it only shares
momentary information about opponent tanks that are within
the detection range of at least one own unit. If all units that
see a certain opponent tank are destroyed, the position of
that tank is no longer distributed by the global map agent
and that opponent disappears from our map.
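The behaviour described above might be sketched as a small global map agent (a sketch; the class and method names are ours):

```python
class GlobalMapAgent:
    """Remembers enemy bases until destroyed; shares enemy tank
    positions only while some own unit currently sees them."""

    def __init__(self):
        self.known_bases = set()

    def update(self, visible_bases, destroyed_bases, visible_tanks):
        self.known_bases |= set(visible_bases)    # remember spotted bases...
        self.known_bases -= set(destroyed_bases)  # ...until they are destroyed
        shared_tanks = list(visible_tanks)        # momentary information only
        return self.known_bases, shared_tanks
```

Note the asymmetry: base knowledge persists across frames, while tank positions are rebuilt from scratch every frame from what the own units currently see.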
B. Dynamic Knowledge about the Terrain
If the game world is completely known, the knowledge
about the terrain is static throughout the game. In the original
bot, we created a static potential field for the terrain at the
beginning of each new game. With fog of war, the terrain
is partly unknown and must be explored. Therefore our bot
must be able to update its knowledge about the terrain.
Once the distance d (in tiles) to the closest impassable terrain has been found, the potential is calculated as:

pterrain(d) =
  -10000,          if d <= 1
  -5 / (d/8)^2,    if d in ]1, 50]
  0,               if d > 50
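Under FoW the set of known cliff tiles grows as units explore, so this field must be recomputed as the map is revealed rather than pre-calculated once. A minimal sketch, assuming the charge reconstructed above and Euclidean distances in tiles:

```python
from math import hypot

def p_terrain(d):
    """Repelling terrain charge at distance d (tiles) from a cliff."""
    if d <= 1:
        return -10000.0
    if d <= 50:
        return -5.0 / (d / 8.0) ** 2
    return 0.0

def terrain_potential(x, y, known_cliffs):
    """Potential from the closest currently known impassable tile."""
    if not known_cliffs:
        return 0.0
    d = min(hypot(x - cx, y - cy) for (cx, cy) in known_cliffs)
    return p_terrain(d)
```

A real bot would cache nearest-cliff distances and update them incrementally when new cliff tiles are spotted, instead of scanning the whole set each frame.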
C. Exploration
Since the game world is partially unknown, our units have
to explore the unknown terrain to locate the hidden enemy
bases. The solution we propose is to assign an attractive field
to each unexplored game tile. This works well in theory
as well as in practice if we are being careful about the
computation resources spent on it.
The potential p_unknown generated in a point (x, y) is calculated as follows:
1) Divide the terrain tile map into blocks of 4x4 terrain tiles.
2) For each block, check every terrain tile in the block. If the terrain is unknown in ten or more of the checked tiles, the whole block is considered unknown.
3) For each block that needs to be explored, calculate the
Manhattan Distance md from the center of the own
unit to the center of the block.
4) Calculate the potential p_unknown each block generates using Equation 14 below.
5) The total potential in (x, y)is the sum of the potentials
each block generates in (x, y).
p_unknown(md) =
  0.25 - md/8000,  if md <= 2000
  0,               if md > 2000        (14)

TABLE II
                 FoWbot                  PIbot
Team         Win %   Units  Bases    Win %   Units  Bases
NUS          100%    29.74  3.62     100%    28.05  3.62
WarsawB      98%     32.35  3.19     99%     31.82  3.21
UBC          96%     33.82  3.03     98%     33.19  2.84
Uofa.06      100%    34.81  4.27     100%    33.19  4.22
Average      98.5%   32.68  3.53     99.25%  31.56  3.47

             vs. FoWbot              vs. PIbot
FoWbot       —                       66%   9.37   3.23
PIbot        34%   4.07   1.81       —
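The steps and Equation 14 above can be sketched as follows (our sketch; the block and tile coordinate conventions are assumptions):

```python
from collections import Counter

def p_unknown(md):
    """Equation 14: charge of an unknown block at Manhattan distance md."""
    return 0.25 - md / 8000.0 if md <= 2000 else 0.0

def exploration_potential(ux, uy, unknown_tiles, block=4):
    """Sum of the exploration charges that all unknown blocks generate
    at the unit's position (ux, uy)."""
    counts = Counter((tx // block, ty // block) for (tx, ty) in unknown_tiles)
    total = 0.0
    for (bx, by), n in counts.items():
        if n >= 10:  # a block is unknown if >= 10 of its 16 tiles are unknown
            cx, cy = bx * block + block / 2.0, by * block + block / 2.0
            md = abs(ux - cx) + abs(uy - cy)
            total += p_unknown(md)
    return total
```

Grouping tiles into 4x4 blocks keeps the number of charge evaluations per unit small, which is the point of the careful resource handling mentioned above.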
VI. EXPERIMENTS

We have conducted three sets of experiments:
1) Show the performance of FoWbot playing against bots
with perfect information.
2) Show the impact of the field of exploration in terms of
the detected percentage of the map.
3) Show computational resources needed for FoWbot
compared to the PIbot.
A. Performance
To show the performance of our bot we have run 100
games against each of the top teams NUS, WarsawB, UBC
and Uofa.06 from the 2007 ORTS tournament as well
as 100 matches against our PIbot. In the experiments the first
game starts with a randomly generated seed, and the seed is
increased by 1 for each game played. The same start seed is
used for all four opponents.
The experiment results presented in Table II show that our MAPF based FoWbot wins over 98% of the games even though our bot has imperfect information and the opponent bots have perfect information about the game world.
We may also see that when PIbot and FoWbot are facing
each other, FoWbot wins (surprisingly enough) about twice
as often as PIbot. We will come back to the analysis of these
results in the discussion.
B. The Field of Exploration
We ran 20 different games in this experiment; in each, the opponent faced both a FoWbot with the field of exploration enabled and one where this field was disabled (the rest of the parameters, seeds, etc. were kept identical).
Figure 2 shows the performance of the exploration field.
It shows how much area, for both types of bots,
is explored, given how long a game has proceeded. The
standard deviation increases with the time since only a few
of the games last longer than three minutes.
In Table III, we see that the use of the field of explo-
ration (as implemented here) does not improve the results
dramatically. The differences are not statistically significant.
Fig. 2. The average explored area given the current game time for a bot
using the field of exploration, compared to one that does not.
TABLE III
Version       Won  Lost  Avg. Units  Avg. Bases
With FoE      20    0    28.65       3.7
Without FoE   19    1    27.40       3.8
C. Computational Resources
To show the computational resources needed we have run
100 games using the PIbot against team NUS and 100 games
with the same opponent using the FoWbot. The same seeds
are used in both series of runs. For each game we measured
the average time (in milliseconds) that the bot uses in each
game frame and the number of own units left. Figure 3 shows
the average time for both our bots in relation to the number of own units left.
VII. DISCUSSION

The performance shows good results, but the question remains: could it be better without FoW? We ran identical
Fig. 3. The average frame time used for PIbot and FoWbot against team NUS.
experiments which showed that the average winning percent-
age was slightly higher for the PIbot compared to the FoWbot
when they faced the top teams of ORT S 2007, see Table II.
We can also see that the numbers of units as well as bases left are marginally higher for the FoWbot compared to the PIbot. However, these results are not statistically significant.
Where we actually see a clear difference is when PIbot meets FoWbot: surprisingly enough, FoWbot wins 66 out of 100 games. We therefore ran a second series of
100 matches with a version of the PIbot where maximum
detection range (i.e. the range at which a bot starts to sense
the opponents’ potential field) was decreased from 1050 to
450. This is not the same as the visibility range in the FoWbot
(which is just 160). Remember that the FoWbot has a global
map agent that helps the units to distribute the positions of
visible enemies to units that do not have visual contact with
the enemy unit in question. However, the decrease of the
maximum detection range in PIbot makes it less prone to
perform single unit attacks and the FoWbot only wins 55 out
of 100 games in our new series of matches, which leaves a
37% probability that PIbot is the better of the two (compared
to 0.2% in the previous case).
In Figure 2 we see that using the field of exploration
in general gives a higher degree of explored area in the
game, but the fact that the average area is not monotonically
increasing as the games go on may seem harder to explain.
One plausible explanation is that the games where our units
do not get stuck in the terrain will be won faster as well
as having more units available to explore the surroundings.
When these games end, they do not contribute to the average
and the average difference in explored areas will decrease.
Does the field of exploration contribute to the perfor-
mance? Is it at all important to be able to explore the map?
Our results (see Table III) indicate that, in this case, it may not be that important. However, the question is complex. Our
experiments were carried out with an opponent bot that had
perfect information and thus was able to find our units. The
results may have been different if also the opponent lacked
perfect information.
Concerning the processor resources, the average computational effort is initially higher for the PIbot. The reason is that it knows the positions of all the opponent units, and thus includes all of them in the calculations of the strategic potential field. As the number of remaining units decreases, the FoWbot has a slower decrease in the need for computational power than the PIbot. This is because there is a comparably high cost to keep track of the terrain and the field of navigation that it generates, compared to having it static as in the case of the PIbot.
This raises the question of whether having access to perfect information is an advantage compared to using a FoWbot. It seems to us, at least in this study, that this is not at all the case. Given that we have on average around 32 units left when the game ends, the average time frame probably requires more from the PIbot than from the FoWbot. However, that
will have to be studied further before any general conclusions
may be drawn in that direction.
Finally, some comments on the methodology of this study. There are of course details that could have been adjusted in the experiments in order to e.g. balance the performance of PIbot vs FoWbot, for example by setting the detection range in the PIbot identical to the one in the FoWbot and at the same time adding the global map agent (which today is only used in the FoWbot) to the PIbot. However, it would significantly increase the computational needs of the PIbot to do so. We are of course eager to improve our bots as far as possible (for the next ORTS competition 2009; a variant of our PIbot won the 2008 competition in August with a win percentage of 98%), and every detail that may improve them should be investigated.
VIII. CONCLUSIONS AND FUTURE WORK

Our experiments show that a MAPF based bot can be modified to handle imperfect information about the game
world, i.e. FoW. Even when facing opponents with perfect
information our bot wins over 98% of the games. The
FoWbot requires about the same computational resources as
the PIbot, although it adds a field of exploration that increases
the explored area of the game.
Future work includes a more detailed experiment regarding the computational needs, as well as an attempt to utilise our experiences from these experiments in the next ORTS tournament, especially the feature that made FoWbot beat PIbot.
ACKNOWLEDGMENTS

We would like to thank Blekinge Institute of Technology for supporting our research, and the organisers of ORTS, especially Michael Buro, for providing us with an interesting competition.
REFERENCES

[1] Alexander Nareyek. AI in computer games. Queue, 1(10):58–65, 2004.
[2] Michael Buro. ORTS — A Free Software RTS Game Engine, 2007. URL last visited on 2008-08-
[3] Michael Buro, Marc Lanctot, and Sterling Orsten. The second annual
real-time strategy game ai competition. In Proceedings of GAMEON
NA, Gainesville, Florida, 2007.
[4] O. Khatib. Real-time obstacle avoidance for manipulators and mobile
robots. The International Journal of Robotics Research, 5(1):90–98,
[5] R. C. Arkin. Motor schema based navigation for a mobile robot. In
Proceedings of the IEEE International Conference on Robotics and
Automation, pages 264–271, 1987.
[6] J. Borenstein and Y. Koren. The vector field histogram: fast obstacle
avoidance for mobile robots. IEEE Journal of Robotics and Automa-
tion, 7(3):278–288, 1991.
[7] M. Massari, G. Giardini, and F. Bernelli-Zazzera. Autonomous
navigation system for planetary exploration rover based on artificial
potential fields. In Proceedings of Dynamics and Control of Systems
and Structures in Space (DCSSS) 6th Conference, 2004.
[8] J. Borenstein and Y. Koren. Real-time obstacle avoidance for fast
mobile robots. IEEE Transactions on Systems, Man, and Cybernetics,
19:1179–1187, 1989.
[9] A. Howard, M. Matari´
c, and G.S. Sukhatme. Mobile sensor network
deployment using potential fields: A distributed, scalable solution to
the area coverage problem. In Proceedings of the 6th International
Symposium on Distributed Autonomous Robotics Systems (DARS02),
[10] S.J. Johansson and A. Saffiotti. An electric field approach to au-
tonomous robot control. In RoboCup 2001, number 2752 in Lecture
notes in artificial intelligence. Springer Verlag, 2002.
[11] Thomas R ¨
ofer, Ronnie Brunn, Ingo Dahm, Matthias Hebbel, Jan
Homann, Matthias J¨
ungel, Tim Laue, Martin L¨
otzsch, Walter Nistico,
and Michael Spranger. GermanTeam 2004 - the german national
Robocup team, 2004.
[12] C. Thurau, C. Bauckhage, and G. Sagerer. Learning human-like
movement behavior for computer games. In Proc. 8th Int. Conf. on
the Simulation of Adaptive Behavior (SAB’04), 2004.
[13] Johan Hagelb¨
ack and Stefan J. Johansson. Using multi-agent potential
fields in real-time strategy games. In L. Padgham and D. Parkes,
editors, Proceedings of the Seventh International Conference on Au-
tonomous Agents and Multi-agent Systems (AAMAS), 2008.
[14] Johan Hagelb¨
ack and Stefan J. Johansson. The rise of potential fields
in real time strategy bots. In Proceedings of Artificial Intelligence and
Interactive Digital Entertainment (AIIDE), 2008.