Abstract
A survey was conducted to gather information and gain insight into what type of artificial player people want to face in a game AI. As a result of the data analysis and research, it was found that in order to create a realistic RTS game AI, the design must be broken down into two main categories: tactics and strategy. Those categories must then work together to produce a realistic AI for an RTS game. This can be achieved by implementing the basic techniques presented in this paper. Although this paper focuses on the RTS game, it is also possible for other fields, such as the military, to adapt this research and use it for their own applications. Future research in this area will always be necessary, as no two RTS games are alike. Even new games within the same series, such as “Warcraft” to “Warcraft II”, will have new features which will require new AI design.
Table of Contents
Abstract.........................................................................................3
1.0 Introduction.............................................................................6
1.10 Major Terms Defined.............................................................6
1.11 Artificial Intelligence.............................................................6
1.12 Intelligent Agent...................................................................6
1.13 Realistic................................................................................6
1.14 Real-Time Strategy Game.....................................................7
1.20 History...................................................................................7
1.21 History of Artificial Intelligence.............................................7
1.22 History of Computer Games..................................................8
1.23 History of Real-Time Strategy Game Artificial Intelligence...9
1.30 Research Strategy...............................................................10
2.0 Central Question...................................................................10
2.10 Importance..........................................................................11
2.20 Scope..................................................................................12
3.0 Anticipated Findings.............................................................12
4.0 Evidence and Discussion of Findings....................................13
4.10 Tactics.................................................................................13
4.11 Combat...............................................................................13
4.12 Path-Finding........................................................................14
4.20 Strategy..............................................................................18
4.21 Sub-Goal Identification via Scouting...................................18
4.22 Engaging the Enemy...........................................................20
4.23 Learning with Artificial Neural Networks............................22
4.24 Economy Management.......................................................26
5.0 Overall Discussion and Conclusion.......................................29
5.10 Future Research..................................................................30
6.0 Appendix...............................................................................31
6.10 A* Algorithm.......................................................................31
6.20 The Simple Swarms Algorithm............................................32
6.30 Formation Identification with an Influence Map.................33
6.40 Survey Results....................................................................34
7.0 References............................................................................35
1.0 Introduction
knowledge and then acts upon that knowledge (Schwab, 2004, p. 3). This is
(Weiss, 1999, p. 29). For the scope of this paper, the environment assigned
the game. This design objective will be broken down into sub-objectives as
1.13 Realistic
Within the scope of this paper, realistic refers to the ability of an AI to
make a human player believe they are playing against another human
results are in the appendix). In addition, the survey concluded that people do not want the AI to cheat (having knowledge of things the human has no knowledge of); they want the AI to have the same amount of knowledge as they do.
time. Also, RTS games are “generally understood to be in the harvest, build, destroy” mold. Well-known examples include the “Command & Conquer” series and Ensemble Studios’ “Age of Empires” series. The genre generally does not include games such as Electronic Arts’ “SimCity” or Lionhead Studios’ “Black and White”, as these games fall under the genres of “city-building” games and “god games” respectively.
1.20 History
1.21 History of Artificial Intelligence
The idea of machine intelligence appeared as early as 1863, in an article in a New Zealand newspaper. It was entitled “Darwin among the Machines” and was
written by a man under the pseudonym Cellarius. The most provocative statement made in the article was that one day machines would “hold supremacy over the world and its inhabitants” (Butler, 1914, p. 185). However, it was not until 1956 that a computer scientist named John McCarthy actually coined the term “artificial intelligence”.
1.22 History of Computer Games
The identity of the first computer game is a highly debated topic. There was a patent filed on January 25, 1947 that
1948). However, another source states that the first game ever created was built by William Higinbotham at Brookhaven National Laboratory (Charles et al., 2008, p. 2). His game, “Tennis for Two”, was played on an analog machine (Charles et al., 2008). To further add to the confusion over the first computer game, yet another candidate is generally regarded as the first computer game ever created, because it ran on a digital computer, whereas the above games were all built on analog machines.
However, it was not until the 1970s that computer game programmers began adding artificial intelligence to their games. First attempts were rudimentary at best, as the agents did not seem to make decisions on their own. These agents followed set patterns and paths. Game AI has come a long way since then, but it is still leaps and bounds from perfection. In modern games, players expect a realistic computer player who not only reacts to the actions of the human player, but also seems to make its own decisions. The “predictability” of those earlier agents is largely gone.
1.23 History of Real-Time Strategy Game Artificial Intelligence
One milestone came with an emerging type of game. The game was for the Sega Genesis console and was called “Herzog Zwei”, which was a “two-player game in which the object[ive] [was] to destroy the enemy’s base” (Geryk, 1998). This game was also the first RTS game to incorporate a basic level of AI. As Schwab explains, “the world gets its first taste of bad pathfinding” (Schwab, 2004, p. 5). Ever since then, AI has been improving in RTS games. Games such as “Command & Conquer: Red Alert” and “StarCraft” paved the way for
modern titles such as “Age of Empires III” that incorporate various advanced AI techniques to fool a human player into believing they are playing against another human player. Dave Pottinger, the lead programmer of Age of Empires III, has described the goal of making the player feel they played a fair game (Staff, 2005). Not only is the human player now fooled into believing they are playing another human; the AI achieves this fairly, without cheating.
1.30 Research Strategy
The research strategy first adopted for this paper was to use a variety of sources. I posted on an AI development
forum (www.aigamedev.com) to ask what specific direction I should take for the purpose of this paper; I also asked if I could interview someone who works in the industry. One reply suggested using “one of the free online tools … If you ask open-ended questions too, it’ll give” richer data (alexjc, 2008). Based on this advice, a
survey was created which helped gather perceptual data as to what type of artificial player people want to play against.

2.0 Central Question
The central question (CQ) of this paper is, “Can a Realistic Artificial Intelligence be Created for a Real-Time Strategy Game?”
The topic of AI was chosen because I am extremely interested in it. Also, AI is a rich and rapidly growing source of information. After choosing a general topic, the first CQ was chosen, which then had to be narrowed: there are so many genres of computer games that a specific genre had to be selected. The genre chosen was the RTS genre, as I am very interested in this type of game and RTS games are well suited to this kind of research.
2.10 Importance
The study of AI in RTS games is an extremely important topic, not only for gamers, but for the entire world. Specifically, government and industry are beginning to benefit from this type of research, as RTS game AI has to be able to make many complex decisions in real time. For example, the military can take this type of AI and adapt it directly for its own applications.
2.20 Scope
The topic of AI in an RTS game contains enough material to fill hundreds of books. The main intention of this paper is therefore limited: to present the core of a realistic RTS game AI. It is important to note that there are many ways of achieving this, and this paper aims to discover and present the most practical of them. Covering every case is not possible, as there are many different types of RTS games. Instead, this paper is intended to provide a framework for creating a realistic RTS game AI.
3.0 Anticipated Findings
It is anticipated that the AI will be able to fully emulate a human and fool a human player into believing they are playing against another human. In addition, it is anticipated that this will be achieved without the AI cheating.
4.0 Evidence and Discussion of Findings
The AI of an RTS game can be compared to a General in any type of war. A General is usually the one who orchestrates an entire war. In order to win a war, the General must consider many variables; however, it is felt that all of these variables can be categorized into two separate branches. The AI design presented here is likewise split into these two separate branches, which are tactics and strategy. The reason for splitting the AI into these two separate categories is that it then becomes a set of smaller, more manageable problems.
4.10 Tactics
Tactics is defined as “the branch of military science dealing with detailed maneuvers to achieve objectives set by strategy” (definr, 2008). This category can then be further broken into smaller, more manageable sub-categories; this paper will focus on two of them, combat and path-finding, as they are the most essential.
4.11 Combat
Combat will refer to the act of individual units choosing what attacks and defenses to use. This contrasts
with the Strategy side, where the Strategy AI tells the units where and what to attack. In most RTS games, a particular unit has only one way of attacking and one way of defending, as this is the easiest way to code the AI: there is only one Attack value and one Defense value. However, in newer RTS games certain units may have more than one type of weapon and more than one type of defense system. In situations such as this, an equation similar to the one in section 4.22 can be used. For example, let’s assume the AI has a tank that is being approached by a group of enemy infantry. The tank is equipped with a .50 caliber machine gun (weak against other vehicles and strong against infantry) and with a 120mm cannon (weak against infantry and strong against other vehicles). A decision has to be made: what weapon should be used against this enemy? By observing the type of unit approaching and its strengths and vulnerabilities (as seen in section 4.22), the logical choice is to use the .50 caliber machine gun. This choice would be made because the tank has determined that it would inflict more damage with that weapon.
4.12 Path-Finding
Path-Finding (PF) refers to the way an AI maps a route to get from point A to point B. The most popular technique for this is the A* algorithm: “The A* (pronounced a-star) algorithm will find a path between two points on a map” (Matthews, 2002, p. 105). This is the best method for finding the shortest path between two points, as it is extremely efficient.
The way A* is applied to modern 3D games, where maps are usually not tile-based, is by level designers placing waypoints to form a graph that “represents all the ways in which an NPC can navigate through the world” (Lidén, 2002, p. 211). However, A* has two major flaws when it comes to RTS games.
The first flaw is efficiency, since “pathfinding is one of the biggest CPU concerns for RTS games” (Schwab, 2004, p. 100). This is due to the fact that most RTS games have hundreds of units, usually heading to the same place at the same time; because of this, the efficiency of A* greatly decreases, as it now turns into hundreds of loops for each individual unit. One common remedy is to move units as a flock. However, as every agent has to check with every other agent in the flock, the cost grows rapidly with the number of agents in the flock. This is where Simple Swarms (SS) come into play (the SS algorithm is included in the appendix). Not only do SS save the computer from doing hundreds of extra calculations, they also allow “for an easy global control system” (Scutt, 2002, p. 203). This is useful because the
strategy side of the AI can tell one unit where to go, and the other units in the swarm will simply follow.
The second flaw results from obstacles and is best described in figure 4.13.1. As seen there, this can lead to unrealistic gameplay, and since the goal of this paper is realism, a modification is required. The modification is easy to implement and will make the PF more realistic: the unit must calculate the entire path before it starts moving, and then select a new path around any obstacle it discovers along the way. This is known as a Splice path and can be seen in figure 4.13.2. It is obvious that running the entire PF algorithm before the unit starts to move might delay the game.
Figure 4.13.1 The A* algorithm finds what appears to be the shortest path, but due to an obstacle the resulting path turns out not to be the shortest, and it also looks like a very unrealistic choice.

Figure 4.13.2 The modified A* algorithm finds the shortest path, then sees an obstacle and calculates a new path, which is the most realistic.
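The Splice-path idea of figure 4.13.2 can be sketched as follows. The `plan` callback stands in for A* (see section 6.10); the cell representation and function names are hypothetical, and the sketch assumes a route to the goal always exists.

```python
def follow_with_splicing(start, goal, plan, is_blocked):
    """Walk a planned path step by step; when the next step turns out to be
    blocked (e.g., a newly discovered obstacle), re-plan from the current
    cell and splice the fresh path in. `plan(a, b)` returns a path from a
    to b; `is_blocked(cell)` tests a cell. Assumes the goal is reachable."""
    path = plan(start, goal)
    walked = [start]
    i = 1
    while walked[-1] != goal:
        step = path[i]
        if is_blocked(step):
            path = plan(walked[-1], goal)  # splice: fresh path from here
            i = 1
            continue
        walked.append(step)
        i += 1
    return walked
```

The unit therefore begins moving immediately along the precomputed route and only pays for a second search when an obstacle actually interferes, which is the trade-off the text describes.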
4.20 Strategy
Strategy is defined as “the branch of military science dealing with military command and the planning and conduct of a war” (definr, 2008). In RTS games, there are an infinite number of possible moves throughout the course of a game (Ponsen, 2004, p. 14). The ultimate goal of AI in an RTS game is to win. This goal can be split into sub-goals which must be achieved before winning can take place (Ponsen, 2004, p. 14). The problem in RTS games now becomes identifying those sub-goals. This is because many situations change from game to game. For example, the enemy might expand its base by building another base in one game, but not in another game. It becomes the duty of the AI to discover these situations through scouting, as the AI is not entitled to any more information than the human player. Thus, the AI must maintain what can be described as a queue of events that must be completed for the AI to win the game. For instance, the AI sends out a scout and identifies a second enemy base. A new sub-goal will be created and added to the queue to destroy this base.
Figure 4.21.1 A simple sub-goal queue for an AI in an RTS game, where a scout has identified a second enemy base and inserted it into the queue.
Scouting also allows the AI to anticipate a threat (reacting to an attack before it happens). For example, a scout is sent out to scout the enemy’s base and reports that it sees a hundred air units. Therefore, the AI is going to start to build anti-air defenses in its base and will also adjust its unit production. Scouting can likewise create goals for building management. For example, if the AI scout identifies a new source of gold, two new sub-goals will be created: one to build a mine, and the second to build a few new workers to gather the gold. In order to save all the information gathered by the scout, the AI will update its knowledge of the map.
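A sub-goal queue like the one in figure 4.21.1 can be sketched directly; the goal names and the sighting-to-goal mapping below are illustrative only, not taken from any cited source.

```python
from collections import deque

# A minimal sub-goal queue: the AI starts with its ultimate goal, and
# scouting reports insert new goals as they are discovered.
subgoals = deque(["destroy main base"])

def report_scouting(sighting):
    """Translate a scout sighting into new sub-goals (illustrative mapping)."""
    if sighting == "second enemy base":
        subgoals.append("destroy second base")
    elif sighting == "gold deposit":
        subgoals.append("build mine")
        subgoals.append("train extra workers")

report_scouting("second enemy base")
report_scouting("gold deposit")
```

The AI would then pop goals from the front of the queue as they are completed, exactly as the queue in figure 4.21.1 suggests.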
One key issue with sub-goals which must be addressed is: how does the AI rank them? Ranking goals offers a way of
injecting a personality into your AI (O’Brien, 2002, p. 379). For example, if you wanted your AI to have a defensive style of play, you could make it rate defensive goals higher than offensive goals. However, in most RTS games there can be more than one way to achieve a certain sub-goal. For instance, your AI has sent out a scout which has identified that the enemy has a large air force. At first inspection, it would seem likely that the AI would start to build anti-air defenses; however, a more aggressive AI will build new fighters instead, as they could be used later to attack the enemy and now to defend against the enemy (O’Brien, 2002, p. 380).
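Personality-weighted goal rating, as O’Brien describes it, can be sketched like this; the weights, goal names, and base scores are made-up values for illustration.

```python
# A defensive AI multiplies defensive goals' base scores by a higher
# weight; an aggressive AI would invert these numbers. Illustrative only.
PERSONALITY = {"defensive": 1.5, "offensive": 0.8}

def rate(goal):
    """Personality-adjusted score for a candidate sub-goal."""
    return goal["base_score"] * PERSONALITY[goal["kind"]]

goals = [
    {"name": "build anti-air turrets", "kind": "defensive", "base_score": 10},
    {"name": "build fighters",         "kind": "offensive", "base_score": 12},
]
best = max(goals, key=rate)  # the defensive personality picks the turrets
```

Swapping the two weights would make the same AI choose the fighters, which is exactly the dual-use reasoning described above.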
As can be seen, scouting is critically important when designing an AI for an RTS game. In the past, and even today, many AIs send their units straight at the enemy (Woodcock, 2002, p. 221). This is extremely dangerous for the units, as they can get picked apart piecemeal. This problem can be addressed by introducing influence maps (IM) into the game. An IM is essentially a grid of values recording each side’s influence over every part of the map. A sample algorithm for determining the enemy’s formations with an influence map can be found in section 6.30 of the appendix.
Once the enemy’s formations have been identified, the AI can make intelligent decisions on how to engage them. The way in which the AI engages the enemy is based entirely on what type of RTS game it is. In a historical setting, real doctrine can even be encoded: “Most of the armies of Europe had a basic “strategy manual” that told each general exactly what to do once the enemy’s front, flanks, rear, and so forth had been identified” (Woodcock, 2002, p. 225). However, this type of AI can
become predictable, and it can only work in a small number of RTS games, as many of these games are fictional and might take place in the future. A more general approach is therefore needed for most RTS games. For instance, an RTS game may have different types of units such as infantry, tanks, air units, etc. In games such as this, it is likely that some units are vulnerable to certain types of units while others have good defenses against attacks from other types of units. For a situation like this, you want the AI to optimize its chances of winning the engagement by computing an Option value for which of its units should attack which enemy units (Woodcock, 2002, p. 226), where Attack is the attack value of the AI’s unit, Defense is the defense value of the enemy’s unit, Distance is the distance between the two units, and Gradient is the value between the two units in the IM. Once the AI has a list of all the
Options, it can send its units in based on the highest value, which will maximize its chances of success.
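The exact Option formula is not reproduced in this paper, so the following is one plausible scoring that combines the four quantities named above (Attack, Defense, Distance, Gradient); the particular weighting is an assumption, not Woodcock’s actual equation.

```python
def option_score(attack, defense, distance, gradient):
    # Favor strong attack against weak defense, and nearby targets with a
    # shallow influence gradient between the two units. The +1 avoids
    # division by zero; the whole weighting is an assumption.
    return (attack - defense) / (distance + gradient + 1)

# Hypothetical Option values for one AI unit against two visible enemies.
targets = {
    "infantry": option_score(attack=8, defense=2, distance=3, gradient=1),
    "tank":     option_score(attack=8, defense=7, distance=2, gradient=1),
}
best_target = max(targets, key=targets.get)  # highest Option wins
```

Whatever the precise formula, the selection step is the same: rank all Options and commit units against the highest-scoring target.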
4.23 Learning with Artificial Neural Networks
The learning technique that this paper will focus on is called Artificial Neural Networks (ANN). ANNs attempt to emulate the real neural networks within the brains of
living creatures. Learning adjusts the weights between the components of the network, allowing them to learn an optimal or near-optimal behavior (Tozour, 2002, p. 7). A sample of an ANN can be seen in figure 4.23.1. ANNs can be trained through reinforcement learning: “In order to learn, the network is not told which actions to take but instead must discover which actions yield the most reward by trying them.” If an action has been successful, then the weights are altered to reinforce that action.
There are four main elements of a reinforcement learning system (Sutton & Barto, 1998): a policy, a reward function, a value function, and a model of the environment. A policy “determines what action the agent can take in each state” (Charles et al., 2008). For example, a policy in an RTS game could be to attack the enemy or to wait for the enemy to attack first. A reward function provides a reward to the AI for being in each state; rewards are usually floating point numbers. The third element is the value function.
This value function measures the long-term value of a policy, which in an RTS game goes beyond the immediate reward. Finally, a model of the environment is used in order to predict the next state of the environment (Charles et al., 2008, p. 205); in an RTS game, this corresponds to the sub-goals. In order for the AI to rank decisions based on rewards it has been given in the past, the sample-average estimate can be used:

Qt(a) = (r1 + r2 + … + rk) / k

where Qt(a) is the estimated value of an action at a given time, r is a reward, and k is the number of rewards received so far. This is just a fancy equation to find the average reward for an action, in order for the AI to decide which action is the optimal one. These four main elements form the basis of any type of reinforcement learning system.
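The sample-average estimate above translates directly into code; the action names and reward histories below are illustrative.

```python
def q_estimate(rewards):
    # Sample-average estimate: Qt(a) = (r1 + r2 + ... + rk) / k.
    return sum(rewards) / len(rewards) if rewards else 0.0

# Hypothetical reward histories for two candidate policies.
history = {"attack": [1.0, 0.0, 1.0], "wait": [0.5, 0.5]}
best_action = max(history, key=lambda a: q_estimate(history[a]))
```

Here the AI prefers the action with the higher average past reward; a real learner would also mix in some exploration rather than always taking the maximum.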
Within an RTS game, reinforcement learning is the only possible way to learn. This is because “players will gradually adjust their strategy over time” (Tuyls & Parsons, 2007, p. 408). In addition, there are an infinite number of moves in an RTS game; thus, reinforcement learning is the only way to go. One way to train such a learner is with evolutionary algorithms (EAs), which evolve the weights when building AI. Another reason why EAs are effective in RTS games is that the weights can be saved across all games. This means that over time the AI is able to learn and improve. This technique can be adopted for many parts of the AI in an RTS game. It can be used in exploration, research, etc. For example, most RTS games involve doing research in order to advance or evolve your race. This research can unlock new buildings to construct, new units to build, new upgrades for units, etc.
Using EAs, over time, an optimal path of research will be achieved, which will help to maximize the AI’s chances of winning. It is important to note that the weights used in the EA must still be perturbed at random, based on the chance of mutation.
4.24 Economy Management
Economy management is a central part of the AI design for an RTS game. This is because anything the AI does is influenced by the state of its economy. Building placement could also be considered part of this part of your AI, but this paper is not concerned with building placement. How an AI manages its economy is also dependent upon its personality. For example, a highly aggressive AI will start to build units before it will start to further expand its base into a second location. Learning, as it applies to research, was covered in section 4.23, so this section will cover the managing of resources and research priorities.
To manage its resources, the AI must know how much money, wood, metal, etc. is needed to construct every type of building and unit. For instance, figure 4.24.1 shows a simple resource chart for a made-up game; suppose the AI has decided it wants to build 50 Air-Units and has $5000, 2000 units of wood, and 3000 units of metal.
Unit Type Money Wood Metal
Tank 150 10 100
Air-Unit 250 20 200
Infantry 25 10 5
Figure 4.24.1 A simple resource-chart.
Therefore, the AI needs to know how much more money, wood, and metal it will need. This can be computed as Needed = (Cost × x) − r, where Cost is the resource cost of the unit, x is the number of units to be created, and r is the amount of that particular resource the AI has on hand. So, for the above example, the AI needs $7500 more, −1000 units of wood (a surplus), and 7000 more units of metal. Therefore, the AI can take some of its workers off of gathering wood and send more to gather money and metal. However, while doing this the AI must keep its overall economy balanced.
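The resource calculation for figure 4.24.1 can be checked in code; only the chart values from the figure are used, and the function name is ours.

```python
# Resource chart from figure 4.24.1.
COSTS = {
    "Tank":     {"money": 150, "wood": 10, "metal": 100},
    "Air-Unit": {"money": 250, "wood": 20, "metal": 200},
    "Infantry": {"money": 25,  "wood": 10, "metal": 5},
}

def still_needed(unit, count, on_hand):
    # Needed = (Cost * x) - r for each resource; negative means a surplus.
    return {res: COSTS[unit][res] * count - on_hand[res] for res in COSTS[unit]}

needed = still_needed("Air-Unit", 50, {"money": 5000, "wood": 2000, "metal": 3000})
# needed: money 7500, wood -1000 (surplus), metal 7000
```

The negative wood figure is the signal to reassign wood gatherers toward money and metal, as described above.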
Most strategy games feature research that allows a player to evolve their empire and to develop new technologies. The paths of this research can be represented as a tech-tree, as seen in figure 4.24.2. A tech-tree like this can allow the AI to “build toward a goal” (Tozour, 2002, p. 354). For example, if the AI decided it wanted to build a Spearman with the above tech-tree, then it can see that it first needs to construct a Barracks and be in the Medieval Age. Not only are tech-trees a good way for the AI to see which research paths it wants to take first in order to attain a goal the fastest, they can also be used by the AI to identify weak spots in the enemy’s forces.
Figure 4.24.2 A simple tech-tree (Tozour, 2002, p. 353).
For example, suppose the AI sends out a scout (as seen in section 4.21) and the scout identifies that the enemy has an archer. Using figure 4.24.2, the AI can work backwards and tell that the enemy has an archery range and is in the Medieval Age. This is invaluable information, as the AI can assume the enemy also has Crossbowmen, and it can help to influence what type of units and buildings the AI wants to construct.
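Working backwards through a tech-tree can be sketched as follows. The prerequisite table is a hypothetical fragment consistent with the relationships the text reads off figure 4.24.2, not the figure itself.

```python
# Hypothetical fragment of the tech-tree: each item maps to its direct
# prerequisites, as described in the text for figure 4.24.2.
PREREQS = {
    "Archer":        ["Archery Range"],
    "Archery Range": ["Medieval Age"],
    "Medieval Age":  [],
}

def infer_from(sighting):
    """Work backwards: everything the enemy must already have in order
    to field the sighted unit or building."""
    known, stack = set(), [sighting]
    while stack:
        for req in PREREQS.get(stack.pop(), []):
            if req not in known:
                known.add(req)
                stack.append(req)
    return known
```

Sighting a single archer thus implies both an archery range and the Medieval Age, exactly the inference described above.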
5.0 Overall Discussion and Conclusion
In early RTS games, within a very short time, human players were able to discern the “thinking patterns” of the AI and exploit them for the rest of the game. As outlined in this paper, such realism can be achieved when the AI design is broken into tactics and strategy, and when the programmer breaks the large task of the AI into smaller tasks, as seen in figure 5.0.1. It is important to note that, with the use of certain tactics and strategies, the AI is able to achieve a level of realism in a real-time strategy game that can make a human player feel they are competing against another human player. Both players are provided with the same information, setting the stage for a competition that is fair and equally matched.

5.10 Future Research
Future research in this area will involve finding new ways of doing things and tweaking how things were done in the past. In the military, some of the benefits include data analysis as well as the adaptation of these techniques for its own applications.
6.0 Appendix
6.10 A* Algorithm
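The original appendix listing is not reproduced here; the following is a standard grid-based A* sketch with a Manhattan-distance heuristic, written for this paper’s context rather than copied from the cited sources.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; cells equal to 1 are blocked.
    Returns the list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    def h(cell):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_heap = [(h(start), 0, start)]  # (f = g + h, g, cell)
    came_from = {}
    g = {start: 0}
    while open_heap:
        _, cost, current = heapq.heappop(open_heap)
        if current == goal:
            path = [current]
            while current in came_from:  # walk parents back to start
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if cost > g.get(current, float("inf")):
            continue  # stale heap entry; a cheaper route was found later
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = cost + 1
                if new_g < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = new_g
                    came_from[(nr, nc)] = current
                    heapq.heappush(open_heap, (new_g + h((nr, nc)), new_g, (nr, nc)))
    return None
```

The waypoint-graph variant discussed in section 4.12 uses the same loop with graph edges in place of grid neighbors.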
6.20 The Simple Swarms Algorithm
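The original Simple Swarms listing is likewise not reproduced; the sketch below captures the core idea as described in section 4.12 (units steer toward a shared goal and are pushed apart when too close). The parameters and update rule are our assumptions, not Scutt’s actual code.

```python
import math

def swarm_step(positions, goal, speed=1.0, spacing=1.5):
    """One update tick of a simple swarm: every unit steers toward the
    shared goal, then is nudged away from any neighbor closer than
    `spacing`. Positions are (x, y) tuples; returns the new positions."""
    moved = []
    for i, (x, y) in enumerate(positions):
        dx, dy = goal[0] - x, goal[1] - y
        dist = math.hypot(dx, dy) or 1.0  # avoid dividing by zero at the goal
        nx, ny = x + speed * dx / dist, y + speed * dy / dist
        # Repulsion: push away from any neighbor that is too close.
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            sep = math.hypot(nx - ox, ny - oy)
            if 0 < sep < spacing:
                nx += (nx - ox) / sep * (spacing - sep)
                ny += (ny - oy) / sep * (spacing - sep)
        moved.append((nx, ny))
    return moved
```

Because only one shared goal is stored, the strategy layer can retarget the whole swarm by changing a single value, which is the “easy global control system” noted in section 4.12.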
6.30 Formation Identification with an Influence Map
1. Determine the center of mass of our units (i.e., the approximate location with
the highest value as determined by the influence map).
2. For every enemy unit we can see:
a. Select the unit that has the greatest “gradient” of influences between
the center of mass of our units and his. We arbitrarily call that the
front. There can be only one front.
b. If a given unit is within two squares of a unit belonging to the
designated front, add it to the front as well.
c. If a given unit is further than two squares from the front and has a
gradient of influences less than that leading from the center of mass of
our units to the front, it is designated as a flank unit. There can be
several flanks.
d. If a given unit is within two squares of a unit belonging to a designated
flank, add it to the flank as well.
e. If the shortest path to a given unit from our center of mass runs
through a square belonging to or adjacent to a unit assigned to the
front, that unit is designated as part of the enemy’s rear. There can be
more than one rear.
f. If a given unit is within two squares of a unit belonging to an area
designated as the rear, add it to the rear as well.
3. Any unit that isn’t allocated to one of the preceding groups (front, flank, or
rear) is treated independently and can be considered an individual unit.
There can be any number of individuals.
(Woodcock, 2002, p. 224)
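A heavily simplified sketch of steps 1–2 above, for illustration only: it seeds the front with the unit of greatest gradient and treats everything else as a flank, omitting the rear and individual cases (steps e–f and 3). The data representation is hypothetical.

```python
def classify_formation(enemy_units, gradient):
    """Simplified formation split: the enemy unit with the greatest
    influence gradient seeds the 'front'; units within two squares of
    that seed join the front, and everything else is lumped in as a
    flank here. `enemy_units` is a list of (x, y) cells; `gradient`
    maps a cell to its influence-gradient value."""
    seed = max(enemy_units, key=gradient)
    front = {u for u in enemy_units
             if abs(u[0] - seed[0]) <= 2 and abs(u[1] - seed[1]) <= 2}
    flanks = set(enemy_units) - front
    return front, flanks
```

A faithful implementation would grow the front iteratively (step b) and apply the shortest-path test of step e to separate rear units from flanks.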
[Figure: example influence map, not reproduced] (Woodcock, 2002, p. 221)

6.40 Survey Results
In total, only 22 individuals were surveyed. However, it is felt these results are
accurate as a wide range of people were surveyed in terms of age, gaming
experience, and location.
Note: this was an open-ended question, and keywords were selected from the responses.
7.0 References
alexjc. (2008, February 20). Inquiry Idea’s [Msg 12]. Message posted to
http://aigamedev.com/forums/showthread.php?t=343&page=2
Skillings, J. (2006). Getting Machines to Think Like Us. CNET News. Retrieved March 20, 2008, from http://www.news.com/Getting-machines-to-think-like-us/2008-11394_3-6090207.html
Staff. (2005). Age of Empires III Q&A – Technology Overview. Retrieved March 18, 2008, from http://au.gamespot.com/pc/strategy/ageofempiresiii/news.html?sid=6120033&page=1
Tuyls, K., & Parsons, S. (2007). What evolutionary game theory tells us about multiagent learning. Artificial Intelligence, 171, 406-416.