
Can a Realistic Artificial Intelligence be created for

a Real-Time Strategy Game?


by
Dane Anderson

Submitted to Professor C. Churchill


in partial fulfilment of requirements for
completion of
Comp. Tech. 3IN3
Bachelor of Technology Programme
McMaster University
April 2008
“Every game of skill is susceptible of being played by an automaton.”
– Charles Babbage

Abstract

The purpose of this paper is to answer the central question, “Can a

Realistic Artificial Intelligence be created for a Real-Time Strategy Game?” A

brief framework on how to achieve a realistic real-time strategy (RTS) game

artificial intelligence (AI) is herein outlined in order to demonstrate how RTS

games enable the AI to make optimal decisions in real-time. In order to

gather information and gain insight into what type of artificial player a

human player wanted to compete against, a survey was created. In addition,

17 sources were examined in order to explain how to achieve a realistic RTS

game AI. As a result of the data analysis and research, it was found that in

order to create a realistic RTS game AI the AI design must be broken down

into two main categories: tactics and strategy. Also, those categories must

be broken down even further into other sub-categories. The “tactics”

category consists of combat and path-finding. The “strategy” category

consists of sub-goal identification, engaging the enemy, learning, and

economy management. The evidence is conclusive; it is possible to create a

realistic AI for a RTS game. This can be achieved by implementing the basic

framework as outlined above. Since it is possible to create a realistic AI for a

RTS game, it is also possible for other fields, such as the military, to adapt this research for their own applications. Future research possibilities in RTS game AI are endless. This is because no

two RTS games are alike. Even successive games within the same series, such as “Warcraft” to “Warcraft II”, will have new features which require AI programmers to modify current techniques and invent new ones in order to adapt.

Table of Contents

Abstract
1.0 Introduction
1.10 Major Terms Defined
1.11 Artificial Intelligence
1.12 Intelligent Agent
1.13 Realistic
1.14 Real-Time Strategy Game
1.20 History
1.21 History of Artificial Intelligence
1.22 History of Computer Games
1.23 History of Real-Time Strategy Game Artificial Intelligence
1.30 Research Strategy
2.0 Central Question
2.10 Importance
2.20 Scope
3.0 Anticipated Findings
4.0 Evidence and Discussion of Findings
4.10 Tactics
4.11 Combat
4.12 Path-Finding
4.20 Strategy
4.21 Sub-Goal Identification via Scouting
4.22 Engaging the Enemy
4.23 Learning with Artificial Neural Networks
4.24 Economy Management
5.0 Overall Discussion and Conclusion
5.10 Future Research
6.0 Appendix
6.10 A* Algorithm
6.20 The Simple Swarms Algorithm
6.30 Formation Identification with an Influence Map
6.40 Survey Results
7.0 References

1.0 Introduction

1.10 Major Terms Defined

1.11 Artificial Intelligence


Many authors use the term Artificial Intelligence (AI) to describe an Intelligent Agent. For example, Schwab explains that it is a system that gathers knowledge and then acts upon that knowledge (Schwab, 2004, p. 3). This is a popular misconception, as “Artificial Intelligence is the study of the design of intelligent agents” (Poole, 1998, p. 1).

1.12 Intelligent Agent


An Intelligent Agent (IA) is a computer program that is capable of

autonomous action within a given environment to meet its design objectives

(Weiss, 1999, p. 29). For the scope of this paper, the environment assigned

to the IA is a computer game. The overall design objective of the IA is to win

the game. This design objective will be broken down into sub-objectives as

will be discussed later.

1.13 Realistic
Within the scope of this paper realistic refers to the ability of an IA to

make a human player believe they are playing against another human

player. In particular, my survey found that people want to compete against

an opponent that is unpredictable, challenging, and responsive (survey

results are in the appendix). In addition, the survey concluded that people do not want the IA to cheat (i.e., know things the human player does not); they want the IA to have the same amount of knowledge as they do.

1.14 Real-Time Strategy Game


A Real-Time Strategy (RTS) Game is a war-game that is playable in real-

time. Also, RTS games are “generally understood to be in the harvest, build,

destroy mode (Geryk, 1998).” Examples of RTS games are Blizzard Entertainment’s “StarCraft” and “WarCraft”, Westwood Studios’ “Command & Conquer” series, and Ensemble Studios’ “Age of Empires” series. The general

objective of these games is to manage troops and resources as efficiently as

possible and to develop strategies to overcome artificially intelligent or real

opponents (Charles, Fyfe, Livingstone, & McGlinchey, 2008, p.5).

Specifically, it is not a turn-based war-game such as Sid Meier’s “Civilization” series. In addition, it is not a real-time game such as Electronic Arts’ “SimCity” or Lionhead Studios’ “Black & White”, as these games

fall under the genres of “city-building” games and “god games” respectively.

1.20 History

1.21 History of Artificial Intelligence


The concept of Artificial Intelligence appeared on June 13, 1863, in a New Zealand newspaper article entitled “Darwin among the Machines”, which was

written by a man named Cellarius. The most provocative statement made in

the article was that one day machines would, “hold supremacy over the

world and its inhabitants” (Butler, 1914, p. 185). However, it was not until

1956 that a computer scientist named John McCarthy coined the

term “Artificial Intelligence” in order to describe what he was going to be

researching (Skillings, 2006).

1.22 History of Computer Games


Discerning the actual date of the very first computer game ever created is

a highly debated topic. There was a patent filed on January 25, 1947 that

describes a missile simulation game utilizing a cathode ray tube (Goldsmith,

1948). However, another source found states that the first game ever

created was in 1958 by William Higinbotham who worked at the Brookhaven

National Laboratory (Charles et al., 2008, p. 2). His game, “Tennis for Two”,

was “made by wiring an oscilloscope up to an analogue computer (Charles et

al., 2008).” To further add to the confusion over the first computer game

ever created comes another contender: “Spacewar!” (Schwab, 2004, p. 5).

This game is generally regarded as the first computer game ever created as

it was made on an MIT PDP-1, which was a digital

computer, whereas the above games were all built on analog machines

(Charles et al., 2008).

However, it was not until the 1970’s when computer game programmers

started incorporating AI into their computer games (Schwab, 2004, p. 5).

First attempts were rudimentary at best, as the agents did not seem to make decisions on their own. These agents followed set patterns and paths based upon a player’s actions (Schwab, 2004, p. 5). AI in computer games has

come a long way since then but is still leaps and bounds from perfection. In

today’s computer games a player can expect to play against or with a

realistic computer player who not only reacts to the actions of that player,

but also seems to make its own decisions. The “predictability” of the earlier

games is no longer a feature.

1.23 History of Real-Time Strategy Game Artificial Intelligence


The RTS genre was introduced in 1989 to describe a newly emerging type of game. The first such game was for the Sega Genesis console and

was called, “Herzog Zwei”, which was a “two-player game in which the

object[ive] [was] to destroy the enemy’s base (Geryk, 1998).” This game

was also the first RTS game to incorporate a basic level of AI. As Schwab

explains, “the world gets it[s] first taste of bad pathfinding (Schwab, 2004, p.

5)”. Ever since then, AI has been improving in RTS games. Games such as

“Command & Conquer Red Alert” and “StarCraft” paved the way for

innovative AI in RTS games. Today, we have games such as “WarCraft III”

and “Age of Empires III” that incorporate various advanced AI techniques that

fool a human player into believing they are playing against another human

player. Dave Pottinger, lead programmer of Age of Empires III and director of

technology at Ensemble Studios, explains: “Our AI players have always

played a fair game (Staff, 2005).” Not only is the human player now fooled

into believing they are playing another human, but the AI achieves this fairly by

having the same amount of knowledge as the human; no cheating!

1.30 Research Strategy

The research strategy first adopted for this paper was to use a variety of

resources such as journal articles, textbooks, books, papers, online discussion

forums, and an interview. After asking members of the online discussion

forum (www.aigamedev.com) for the specific direction I should take for the

purpose of this paper, I also asked if I could interview someone who works in

the industry. The website’s administrator suggested that I “put up a survey with

one of the free online tools … If you ask open-ended questions too, it’ll give

you some good interview-like material (alexjc, 2008, p.2).” As suggested, a

survey was created which helped gather perceptual data as to what type of

AI people generally wanted to play against.

2.0 Central Question

The central question (CQ) of this paper is, “Can a Realistic Artificial

Intelligence be created for a Real-Time Strategy Game?” First, the

topic of AI was chosen because I am extremely interested in it. Also, AI is

always being further developed and studied, providing a plethora of

information. After choosing a general topic the first CQ was chosen, which

was, “Can a Realistic Artificial Intelligence be created in a Computer Game?”

After a discussion with the professor, he suggested that the CQ needed to be

further narrowed down. It was decided that instead of focusing on all

computer games, a specific genre would be chosen. The genre chosen

was the RTS genre as I am very interested in this type of game and they are

also extremely complex. This high degree of complexity is what drew me in

to focus my research on this genre.

2.10 Importance

The study of AI in RTS games is an extremely important topic, not only for

gamers, but for the entire world. Specifically, government and industry are

beginning to benefit from this type of research as RTS game AI has to be able

to make optimal decisions in real-time. Conclusive research based on RTS

game AI is directly transferable into these other types of applications. For

example, the military can take this type of AI and program it directly into unmanned aircraft, which could save lives.

2.20 Scope

It is important to define the scope of this paper as the topic of AI in a RTS

game contains enough material to fill hundreds of books. The main intention

of this paper is to answer the CQ by providing an overview of how to achieve

a realistic RTS game AI. It is important to note that there are many ways of

achieving this; this paper aims to discover and present the most

modern techniques. In addition, this paper is geared to be as general as

possible as there are many different types of RTS games. This paper is

intended to provide the framework for creating a realistic RTS game AI.

3.0 Anticipated Findings

It is anticipated that it is possible to create a realistic AI. In particular, the

AI will be able to fully emulate a human and fool a human player into

believing they are playing against another human. In addition, this will be

achieved by not privileging the AI to any additional information than what is

available to a human player.

4.0 Evidence and Discussion of Findings

Before getting into specifics it is important to consider what is involved in

any type of war. A General is usually the one who orchestrates an entire war.

In order to win a war the General must consider many variables; however, it

is felt that all of these variables can be categorized into two separate

categories: tactics and strategy. Therefore, we can consider the IA the General of the artificially intelligent army. Splitting the IA design into these two branches makes it possible to emulate another human realistically, because this division best describes a human’s thought process.

4.10 Tactics

Tactics is defined as, “the branch of military science dealing with detailed

maneuvers to achieve objectives set by strategy (definr, 2008).” The tactics category can be further broken into smaller, more manageable sub-categories such as combat and path-finding. There are other sub-categories, but this paper will focus on these two as they are the main ones.

4.11 Combat
Combat will refer to the act of individual units choosing what attacks and

defenses to implement against different enemies. This is not to be confused

with the Strategy side where the Strategy IA is telling the units where and

what to attack. In most RTS games a particular unit only has one way of

attacking and one way of defending, as this is the easiest way to code the AI: there is only one Attack value and one Defense value. However, in

newer RTS games certain units may have more than one type of weapon and

may have more than one type of defense system. In situations such as this

an equation similar to the one in section 4.22 can be used. For example, let’s assume

the IA has a tank that is being approached by a group of enemy infantry. The

tank is equipped with a .50 caliber machine gun (weak against other vehicles

and strong against infantry), and with a 120mm cannon (weak against

infantry and strong against other vehicles). A decision has to be made. What

weapon should be used against this enemy? By observing the type of unit

approaching and its strengths and vulnerabilities (as seen in section 4.22)

the logical choice is to use the .50 caliber machine gun. This choice would

be made because the tank has determined that it would inflict more damage

to the approaching infantry.
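The weapon choice described above can be sketched as a simple expected-damage comparison. The weapon names and damage multipliers below are illustrative assumptions, not values from any particular game:

```python
# Hypothetical weapon stats for illustration; a real game would load these
# from its unit data. Each multiplier scales base damage against a target type.
WEAPONS = {
    "machine_gun": {"infantry": 0.9, "vehicle": 0.2},
    "cannon":      {"infantry": 0.2, "vehicle": 0.9},
}

def choose_weapon(base_damage: float, target_type: str) -> str:
    """Pick the weapon that inflicts the most expected damage on the target."""
    return max(WEAPONS, key=lambda w: base_damage * WEAPONS[w].get(target_type, 0.0))
```

For the tank example, `choose_weapon(100.0, "infantry")` selects the machine gun, since 100 × 0.9 beats 100 × 0.2.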

4.12 Path-Finding
Path-Finding (PF) refers to the way an IA maps a route to get from point A

to point B. Generally, the IA is concerned with finding the shortest path

between two points. “The A* (pronounced a-star) algorithm will find a path

between two points on a map (Matthews, 2002, p. 105).” This is the best

method for finding the shortest path between two points as it is extremely

efficient. A way in which A* is generally implemented into RTS games as

they are usually not tile-based is by the level designers placing waypoints

throughout the map. “These waypoints serve as nodes in a node-graph that

represents all the ways in which an NPC can navigate through the world (Lidén, 2002, p. 211).” However, A* has two major flaws when it comes to RTS

games.
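Before examining those flaws, a minimal sketch of A* search over such a waypoint node-graph might look as follows. This is an illustrative implementation only; the graph and waypoint coordinates are supplied by the caller:

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """graph: node -> list of neighbour nodes; coords: node -> (x, y)."""
    def h(n):  # straight-line distance heuristic (admissible for Euclidean edges)
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    open_set = [(h(start), start)]   # priority queue ordered by f = g + h
    g = {start: 0.0}                 # cost of best known path to each node
    came_from = {}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:             # reconstruct the path back to start
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for nb in graph[node]:
            (x1, y1), (x2, y2) = coords[node], coords[nb]
            cost = g[node] + math.hypot(x2 - x1, y2 - y1)
            if cost < g.get(nb, float("inf")):
                g[nb] = cost
                came_from[nb] = node
                heapq.heappush(open_set, (cost + h(nb), nb))
    return None                      # no path between start and goal
```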

The first flaw is due to efficiency since, “pathfinding is one of the biggest

CPU concerns for RTS games (Schwab, 2004, p. 100).” This is due to the fact

that most RTS games have hundreds of units, usually heading to the same place at the same time. Because of this, the efficiency of A* greatly decreases, as it now takes hundreds of search loops for each individual unit to

find its way. Generally, in the past, AI programmers have utilized a

technique called Flocking which is “perfect for simulating the naturalistic

behavior of small to medium numbers of creatures (Scutt, 2002, p. 202).” However, as every agent has to check with every other agent in the flock, it results in O(n²) distance calculations, where n is the number of agents in the flock. This is where Simple Swarms (SS) come into

play (The SS algorithm is included within the appendix). Not only do SS save

the computer from doing hundreds of extra calculations, they also make the units appear to be grouped together as they would be in a real army. This

grouping of units is known as a formation. In addition to being efficient, SS,

allow “for an easy global control system (Scutt, 2002, p. 203).” This is

because instead of having hundreds of units roam around independently, the

strategy side of the IA can tell one unit where to go and the other units in the

swarm will follow it.
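The leader-following idea can be sketched as follows: only the leader runs the full path-finding algorithm, while every other unit steers toward the leader with cheap local rules. The speed and separation constants are illustrative assumptions, not values from the Simple Swarms algorithm:

```python
import math

def swarm_step(leader, followers, speed=1.0, separation=1.5):
    """leader: (x, y); followers: list of (x, y). Returns updated positions."""
    new_positions = []
    for i, (x, y) in enumerate(followers):
        dx, dy = leader[0] - x, leader[1] - y
        dist = math.hypot(dx, dy) or 1.0
        vx, vy = speed * dx / dist, speed * dy / dist   # steer toward the leader
        for j, (ox, oy) in enumerate(followers):        # keep formation spacing
            if i != j:
                d = math.hypot(x - ox, y - oy)
                if 0 < d < separation:
                    vx += (x - ox) / d                  # push away from crowding
                    vy += (y - oy) / d
        new_positions.append((x + vx, y + vy))
    return new_positions
```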

The second flaw results from obstacles and is best described in figure

4.13.1. As seen, this can lead to unrealistic game play and since the goal of

this paper is to achieve realism, a modification to A* must be made. This

modification is easy to implement and will make the PF more realistic. The

modification to A* is that instead of the IA calculating its path while it is moving, it must calculate the entire path before it starts moving and then select

the shortest route (Higgins, 2002, p. 131). This modification is known as the

Splice path and can be seen in figure 4.13.2. It is obvious that doing the

entire PF algorithm before the unit starts to move might delay the game.

However, as we are using SS, as stated above, this is no longer a problem

because there will only be a few units doing PF.

Figure 4.13.1 The A* Algorithm finds what appears to be the shortest path, but due to an obstacle the resulting path is not actually the shortest and also looks like a very unrealistic choice.

Figure 4.13.2 The modified A* Algorithm finds the shortest path, then sees an obstacle and calculates a new path, which is the most realistic.

4.20 Strategy

Strategy is defined as, “the branch of military science dealing with

military command and the planning and conduct of a war (definr, 2008).” In

RTS games, there are an infinite number of possible moves throughout the

entire game. “Therefore, goal-directed (backward) reasoning is preferred

(Ponsen, 2004, p. 14)”. The ultimate goal of AI in a RTS game is to win. This

goal is impossible to achieve by itself. Thus, sub-goals must be created and

achieved before winning can take place (Ponsen, 2004, p. 14). The problem

in RTS games now becomes identifying the sub-goals. This is because many

situations may or may not be created within the timeframe of a certain

game. For example, the enemy might expand its base by building another

base in one game, but not in another game. It becomes the duty of the IA to

identify these sub-goals and to make intelligent decisions based on them.

4.21 Sub-Goal Identification via Scouting


Scouting is the main technique employed in identifying sub-goals. As

stated before the AI should be as realistic as possible and should not be

entitled to any more information than the human player. Thus, the IA must

scout to gain information to identify sub-goals. Sub-goals can be best

described as a queue of events that must be completed for the IA to win the

game. For instance, the IA sends out a scout and identifies a second enemy

base. A new sub-goal will be created and added to the queue to destroy this

enemy’s second base. Figure 4.21.1 demonstrates the above situation.

Figure 4.21.1 A simple sub-goal queue for an IA in a RTS game, where a scout has identified a second enemy base and has inserted it into the queue.
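The situation in figure 4.21.1 can be sketched as a simple queue of goal names. The goal labels are illustrative; a real IA would use richer goal objects:

```python
from collections import deque

# A sketch of the sub-goal queue: events that must be completed for the IA
# to win, in order. Goal names here are purely illustrative.
sub_goals = deque(["build_base", "build_army", "destroy_enemy_base_1"])

def on_scout_report(report: str) -> None:
    """Insert a new sub-goal when the scout discovers something."""
    if report == "second_enemy_base_found":
        sub_goals.append("destroy_enemy_base_2")

on_scout_report("second_enemy_base_found")
```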

Not only is scouting important to identify sub-goals, it also allows the IA to

identify these sub-goals early in order to be proactive (i.e., identify an attack

before it happens). For example, a scout is sent out to scout the enemy’s

base and reports that it sees a hundred air units. Therefore, the IA is going

to start to build anti-air defense in its base and will also adjust its unit

building weights, as outlined in section 4.23, to influence the building of units

that are strong against air units.

In addition to the military benefits, scouting can allow the IA to identify

goals for building management. For example, the AI scout identifies a new

source of gold; two new sub-goals will be created, one will be to build a mine

and the second will be to build a few new workers to gather the gold. In

order to save all the information gathered by the scout, the IA will update the

influence map as outlined in section 4.22.

One key issue with sub-goals, which must be addressed, is: how does the IA decide how to prioritize its sub-goals? One way to prioritize them is by

injecting a personality into your IA (O’Brien, 2002, p. 379). For example, if

you wanted your IA to have a defensive style of play you could make it rate

defensive goals higher than offensive goals. However, in most RTS games

there can be more than one way to achieve a certain sub-goal. For instance,

your IA has sent a scout out who has identified that the enemy has a large

air-force, therefore, it adds to build air-defense to its sub-goal queue. Upon

first inspection it would seem likely that it would start to build anti-air

buildings. However, if the IA’s personality is aggressive, it will build new fighters instead, as they can defend against the enemy now and be used to attack the enemy later (O’Brien, 2002, p. 380). As

seen, adding a personality to your IA can make your AI more enjoyable to

play against and less predictable.
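This personality-based prioritization can be sketched as a weighted sort. The personalities, goal categories, and weight values are illustrative assumptions:

```python
# A sketch of injecting personality into goal prioritization: each
# personality weights goal categories differently. Values are illustrative.
PERSONALITY = {
    "defensive":  {"defense": 2.0, "offense": 1.0},
    "aggressive": {"defense": 1.0, "offense": 2.0},
}

def prioritize(goals, personality: str):
    """Sort (category, base_priority) goals by personality-weighted score,
    highest first."""
    weights = PERSONALITY[personality]
    return sorted(goals, key=lambda g: weights[g[0]] * g[1], reverse=True)
```

With goals `[("defense", 5), ("offense", 4)]`, an aggressive IA ranks the offensive goal first (4 × 2.0 = 8.0 beats 5 × 1.0 = 5.0), while a defensive one does the opposite.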

4.22 Engaging the Enemy


How to intelligently engage the enemy is one of the biggest problems when

designing an AI for a RTS game. In the past and even today, many IAs send

units in unintelligently by “trickling units toward the opposition (Woodcock,

2002, p. 221).” This is extremely dangerous for the units as they can get

killed easily one at a time. However, this problem is overcome by

introducing influence maps (IM) into the game. An IM is “really just a fancy

name for grid-based map attributes (Schwab, 2004, p. 101).” A sample IM

can be seen in Figure 4.22.1. Figure 4.22.1 is a visual drawing, whereas a real influence map is stored in the computer’s memory as a 2D array. A

sample algorithm for determining the enemy’s formations can be found in section 6.30 of the appendix.
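As a minimal sketch, an influence map can be represented as a 2D array into which each side’s units deposit influence. The per-unit influence values are illustrative; real games also spread influence over neighbouring cells:

```python
# A sketch of an influence map stored as a 2D array. Positive cells mark
# friendly influence, negative cells mark enemy influence. Values are
# illustrative assumptions.
def build_influence_map(width, height, friendly, enemy):
    """friendly/enemy: lists of (x, y) unit positions."""
    grid = [[0 for _ in range(width)] for _ in range(height)]
    for x, y in friendly:
        grid[y][x] += 1
    for x, y in enemy:
        grid[y][x] -= 1
    return grid
```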

Once the enemy’s formations can be intelligently determined, the IA can

now make intelligent decisions on how to engage them. The way in which

the IA engages the enemy is based entirely on what type of RTS game it is.

Figure 4.22.1 (Woodcock, 2002, p. 221) An influence map


demonstrating formations of units about to engage the enemy (white).

“Most of the armies of Europe had a basic “strategy manual” that told each

general exactly what to do once the enemy’s front, flanks, rear, and so forth

had been identified (Woodcock, 2002, p. 225).” However, this type of AI can

become predictable and can only work in a small number of RTS games as

many of these games are fictional and also might take place in the future.

A modification to the “tried and true” rules of engagement must be made

for most RTS games. For instance, the RTS game may have different types of

units such as infantry, tanks, air units, etc. In games such as this it is most

likely that some units have vulnerabilities against certain types of units and

some units may have good defenses for attacks against other types of units.

For a situation like this you want the IA to optimize its chances of winning the

battle by exploiting these vulnerabilities. An equation such as Option = (Attack − Defense) / (Distance × Gradient) can be used to identify which units should attack which enemy units (Woodcock, 2002, p. 226), where Attack is the attack

value of the IA’s unit, Defense is the defense value of the enemy’s unit,

Distance is the distance between the two units, and Gradient are the

values between the two units in the IM. Once the IA has a list of all the

Options, it can send its units in based on the highest value, which will

maximize its chances of winning.
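A hedged sketch of this target-selection idea follows. The exact form of Woodcock’s Option equation is assumed here to be (Attack − Defense) / (Distance × Gradient); treat this as an illustrative scoring function rather than the published formula:

```python
# A sketch of Option-based target selection. The scoring formula is an
# assumption reconstructed from the surrounding definitions, not a quote
# of the published equation.
def best_target(attacker, enemies):
    """attacker: dict with 'attack'; enemies: list of dicts with 'defense',
    'distance', and 'gradient'. Returns the highest-scoring enemy."""
    def option(e):
        return (attacker["attack"] - e["defense"]) / (e["distance"] * e["gradient"])
    return max(enemies, key=option)
```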

4.23 Learning with Artificial Neural Networks


There are many techniques that have been developed for AI’s to learn.

The technique that this paper will focus on is called Artificial Neural Networks

(ANN). ANNs attempt to emulate the real neural networks within the brains of

humans and animals. “Neural networks operate by repeatedly adjusting the

internal numeric parameters (or weights) between interconnected

components of the network, allowing them to learn an optimal or near-

optimal response for a wide variety of different classes of learning tasks

(Tozour, 2002, p. 7).” A sample ANN can be seen in figure 4.23.1. ANN learning can be categorized into three major categories: supervised, unsupervised, and reinforcement learning (Charles et al., 2008, p. 21). The type we are concerned with for RTS games is reinforcement learning, which attempts to optimize behaviour in a particular situation by trial-and-error.

“In order to learn, the network is not told which actions to take but instead

must discover which actions yield the most reward by trying them. If an

action has been successful then the weights are altered to reinforce that

behavior otherwise that action is discouraged in the modification of the

weights (Charles et al., 2008, p. 22).”

Figure 4.23.1 “A typical artificial neural network consisting of 3 layers of


neurons and 2 connecting layers of weights (Charles et al., 2008, p. 17).”

There are four main elements of a reinforcement learning system (Sutton

& Barto, 1998). They are a policy, a reward function, a value function, and a

model of the environment (Charles et al., 2008, p. 203). A policy

“determines what action the agent can take in each state (Charles et al.,

2008).” For example, a policy in a RTS game could be to attack the enemy

or wait for the enemy to attack first. A reward function provides a reward to

the IA for being in each state. Rewards are usually floating-point numbers and

can be positive (good) or negative (bad) (Charles et al., 2008). In a RTS

game an example of a reward function can be seen in figure 4.23.2. A value function is Vπ(s) = Eπ{R | s}, where s is the state, π is the policy, and Eπ is the expected return when using policy π (Charles et al., 2008, p. 204).

This value function measures the long-term value of a policy, which in a RTS game corresponds to the best action over time.

order to predict the next state of the environment (Charles et al., 2008, p.

205). In a RTS game this is considered the sub-goals. In order for the IA to

rank optimal decisions based on rewards it has been given in the past, the

following equation can be used: Qt(a) = (r1 + r2 + … + rk) / k (Charles et al., 2008, p. 205), where Qt(a) is the estimated value of action a at time t, ri is the i-th reward,

and k is the number of rewards. This is just a fancy equation to find the

average reward for an action in order for it to decide which action is the

optimal one. These four main elements form the basis of any type of

reinforcement learning system.
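The average-reward calculation Qt(a) = (r1 + … + rk) / k can be sketched as a small bookkeeping class. The action names in the usage example are illustrative:

```python
# A sketch of per-action average-reward estimates, stored incrementally
# so old rewards need not be kept. Implements Qt(a) = (r1 + ... + rk) / k.
class ActionValues:
    def __init__(self):
        self.totals = {}   # action -> sum of rewards received so far
        self.counts = {}   # action -> number of rewards received (k)

    def record(self, action, reward):
        self.totals[action] = self.totals.get(action, 0.0) + reward
        self.counts[action] = self.counts.get(action, 0) + 1

    def value(self, action):
        if action not in self.counts:
            return 0.0     # unseen actions default to a neutral value
        return self.totals[action] / self.counts[action]

    def best(self):
        """The action with the highest average reward: the optimal choice."""
        return max(self.counts, key=self.value)
```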

Within a RTS game, reinforcement learning is arguably the only practical way to

learn. This is because “players will gradually adjust their strategy over time

in response to repeated observations of their own and others’ payoffs (Tuyls

& Parsons, 2007, p. 408).” In addition, there are an infinite number of moves

in a RTS game, thus, reinforcement learning is the only way to go. Most

sources refer to this type of ANN as an Evolutionary Algorithm (EA). An

example of an EA can be seen in Figure 4.23.2.

Figure 4.23.2 A visual example of an Evolutionary Algorithm for a unit

building AI.

Another reason why EAs are effective in RTS games is that the weights

can be saved across all games. This means over time the IA is able to

become more intelligent as the optimal solution is reached. This EA concept

can be adopted for many parts of the AI in a RTS game. It can be used in

other applications within a RTS game such as constructing buildings,

exploration, research, etc. For example, most RTS games involve doing

research in order to advance or evolve your race. This research can unlock

new buildings to construct, new units to build, new upgrades for units, etc.

Using EA’s, over time, an optimal path of research will be achieved which will

help to maximize the IA’s chances of winning. It is important to note that the

weights used in the EA must still be chosen at random based on the chance

value in order to maintain a small amount of randomness within the game.
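The unit-building weights of figure 4.23.2 can be sketched as follows. The unit names, learning rate, and weight floor are illustrative assumptions; the weighted random draw preserves the small amount of randomness described above:

```python
import random

# A sketch of the unit-building EA idea: build choices are drawn at random
# in proportion to learned weights, and weights are reinforced after each
# game. Unit names and constants are illustrative assumptions.
weights = {"tank": 1.0, "air_unit": 1.0, "infantry": 1.0}

def choose_unit(rng=random):
    """Pick a unit type at random, biased by the current weights, so the
    IA keeps a small amount of unpredictability."""
    units = list(weights)
    return rng.choices(units, [weights[u] for u in units])[0]

def reinforce(unit, won: bool, rate=0.1):
    """Strengthen weights that led to a win, weaken those that led to a
    loss, never dropping below a small floor."""
    weights[unit] = max(0.1, weights[unit] + (rate if won else -rate))
```

Because the weights persist across games, they can be saved and reloaded so the IA keeps improving over many matches.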

4.24 Economy Management


Economy management (EM) can be thought of as the heart of your AI

design for a RTS game. This is because anything the IA does is influenced by

the economy. EM refers to how your IA manages its economy. More

specifically, it deals with managing buildings, units, resources, and research.

It is important to note that the placement of buildings would be dealt with by

this part of your AI but this paper is not concerned with building placement.

As seen in section 4.21, how your IA manages its economy is highly

dependent upon its personality. For example, a highly aggressive IA will start

to build units before it expands its base into a second base. In addition, unit management has already been covered in section 4.23, so this section will cover the managing of resources and research.

The main goal of the AI in regards to resource management is to always have the requested amounts of resources available to construct buildings and units. This can be achieved by setting up a ‘resource-chart’ so the AI knows how much money, wood, metal, etc. is needed to construct every type of building and unit. For instance, figure 4.24.1 shows a simple resource-chart for a made-up game, in which the AI has decided it wants to build 50 Air-Units and has $5000, 2000 units of wood, and 3000 units of metal.

Unit Type Money Wood Metal
Tank 150 10 100
Air-Unit 250 20 200
Infantry 25 10 5
Figure 4.24.1 A simple resource-chart.

Therefore, the AI needs to know how much money, wood, and metal it will need to achieve this. The equation Needed = (Cost × x) − r can be used, where Cost is the resource cost of the unit, x is the number of units to be created, and r is the amount of that particular resource the AI has on hand. So, for the above example, the AI needs $7500 more money, −1000 units of wood (i.e., a surplus of 1000), and 7000 more units of metal. Therefore, the AI can take some of its workers off of gathering wood and send more to gather money and metal. Meanwhile, the AI can already start to build the Air-Units as the remaining resources are gathered.
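The resource calculation above can be sketched as follows. The struct and function names are illustrative, with the cost values taken from the Air-Unit row of Figure 4.24.1:

```c
/* Resource costs or stocks, matching the columns of the resource-chart
 * in Figure 4.24.1 (field names are illustrative). */
typedef struct {
    int money;
    int wood;
    int metal;
} Resources;

/* Needed = (Cost * x) - r for one resource; a negative result means
 * the AI already holds a surplus of that resource. */
int resource_needed(int cost, int x, int r)
{
    return cost * x - r;
}

/* How much of each resource is still required to build x units of a
 * given type, given what is currently on hand. */
Resources shortfall(Resources cost, int x, Resources on_hand)
{
    Resources need;
    need.money = resource_needed(cost.money, x, on_hand.money);
    need.wood  = resource_needed(cost.wood,  x, on_hand.wood);
    need.metal = resource_needed(cost.metal, x, on_hand.metal);
    return need;
}
```

Running this with the Air-Unit cost (250, 20, 200), x = 50, and the stocks ($5000, 2000 wood, 3000 metal) reproduces the worked example: 7500 more money, a wood surplus of 1000, and 7000 more metal.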

Most strategy games feature research that allows a player to evolve their empire and to develop new technologies. The paths of this research can be shown in a tech-tree; a simple tech-tree can be seen in figure 4.24.2. A tech-tree like this can allow the AI to “build toward a goal” (Tozour, 2002, p. 354). For example, if the AI decided it wanted to build a Spearman with the above tech-tree, it would see that it first needs to construct a Barracks and be in the Medieval Age. Not only are tech-trees a good way for the AI to see which research paths to take first in order to attain a goal the fastest, they can also be used by the AI to identify weak spots in the enemy’s strategy (Tozour, 2002, p. 354). For example, if the AI has a

Figure 4.24.2 A simple tech-tree (Tozour, 2002, p. 353).

scout (as seen in section 4.21) and the scout identifies that the enemy has an Archer, then, using Figure 4.24.2, the AI can work backwards and tell that the enemy has an Archery Range and is in the Medieval Age. This is invaluable information, as the AI can assume the enemy may also have Crossbowmen, and it can help to influence what types of units and buildings the AI wants to construct.
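This backward inference over a tech-tree can be sketched as a recursive walk over prerequisite links. The node layout and names here are hypothetical, loosely following Figure 4.24.2:

```c
#include <string.h>

#define MAX_PREREQS 4

/* One node in a simple tech-tree: something the enemy can own, plus
 * the names of its direct prerequisites. */
typedef struct {
    const char *name;
    const char *prereqs[MAX_PREREQS];
    int num_prereqs;
} TechNode;

/* Look up a node by name; returns NULL if it is not in the tree. */
const TechNode *find_node(const TechNode *tree, int n, const char *name)
{
    for (int i = 0; i < n; i++)
        if (strcmp(tree[i].name, name) == 0)
            return &tree[i];
    return NULL;
}

/* Work backwards from an observed unit: collect everything the enemy
 * must already have.  Writes up to max_out inferred names into out and
 * returns how many were written. */
int infer_prereqs(const TechNode *tree, int n, const char *observed,
                  const char **out, int max_out)
{
    int count = 0;
    const TechNode *node = find_node(tree, n, observed);
    if (!node)
        return 0;
    for (int i = 0; i < node->num_prereqs && count < max_out; i++) {
        out[count++] = node->prereqs[i];
        /* Recurse: prerequisites of prerequisites are also implied. */
        count += infer_prereqs(tree, n, node->prereqs[i],
                               out + count, max_out - count);
    }
    return count;
}
```

With a toy tree in which an Archer requires an Archery Range, which in turn requires the Medieval Age, observing an Archer yields both of those facts about the enemy.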

5.0 Overall Discussion and Conclusion

Earlier games, prior to the inclusion of advanced AI, were very predictable. In a very short time, human players were able to discern the “thinking patterns” of the AI, resulting in boredom. More modern advancements have made it possible to create a realistic artificial intelligence in a real-time strategy game. Such realism can be achieved when the AI programmer follows the framework outlined in this paper. Greater realism can be achieved when the programmer breaks the large task of the AI into smaller tasks, as seen in figure 5.0.1. It is important to note that each category in this break-down can be broken down even further. With the use of certain tactics and strategies, the AI is able to achieve a level of realism that can make a human player feel they are competing against another human player.

Figure 5.0.1 The basic framework of an artificial intelligence in a real-time strategy game.

Both players are provided with the same

information, setting the stage for a competition that is fair and equally

challenging for both opponents.

5.10 Future Research

The scope of this paper was to answer the central question by

demonstrating a basic framework that clearly outlines how to achieve

realistic artificial intelligence in a real-time strategy game. This basic

framework lends itself to many different aspects of future research, as

advances will continuously be made. The scope for future research is vast, as each RTS game is different and requires AI programmers to invent new techniques and to refine how things were done in the past. In

addition to gaming, current research is already proving beneficial to many

other fields, including industry and government agencies such as the

military. In industry and manufacturing, real-time AI is being used to increase productivity, operational safety, and production efficiency. In

the military, some of the benefits include data analysis as well as the

operation of unmanned aircraft.

6.0 Appendix

6.10 A* Algorithm

1. Let P = the starting point.


2. Assign f, g and h value to P.
3. Add P to the Open list. At this point, P is the only node on the Open list.
4. Let B = the best node from the Open list (the best node has the lowest f-value).
a. If B is the goal node, then quit – a path has been found.
b. If the Open list is empty, then quit – a path cannot be found.
5. Let C = a valid node connected to B.
a. Assign f, g, and h values to C.
b. Check whether C is on the Open or Closed list.
i. If so, check whether the new path is more efficient (lower f-
value).
1. If so, update the path.
ii. Else, add C to the Open list.
c. Repeat step 5 for all valid children of B.
6. Repeat from step 4.

(Matthews, 2002, p. 107)

This is an example for a tile-based game.
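A minimal, unoptimized rendering of the algorithm above for a small tile grid might look like the following. It scans the open set directly instead of keeping a sorted list, uses 4-way movement with a Manhattan heuristic, and all sizes are illustrative; a real game would use a priority queue (Higgins, 2002):

```c
#include <stdlib.h>

#define W 8
#define H 8
#define INF 1000000

/* Returns the shortest path length in steps from (sx, sy) to (gx, gy),
 * or -1 if no path exists.  blocked marks impassable tiles. */
int astar_path_length(int blocked[H][W], int sx, int sy, int gx, int gy)
{
    int g[H][W], closed[H][W], open[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            g[y][x] = INF;
            closed[y][x] = open[y][x] = 0;
        }
    g[sy][sx] = 0;
    open[sy][sx] = 1;                      /* step 3: P on the Open list */

    for (;;) {
        /* Step 4: pick the open node with the lowest f = g + h. */
        int bx = -1, by = -1, bestf = INF;
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                if (open[y][x]) {
                    int h = abs(x - gx) + abs(y - gy);
                    if (g[y][x] + h < bestf) {
                        bestf = g[y][x] + h;
                        bx = x;
                        by = y;
                    }
                }
        if (bx < 0)
            return -1;                     /* open list empty: no path */
        if (bx == gx && by == gy)
            return g[by][bx];              /* goal node reached */

        open[by][bx] = 0;
        closed[by][bx] = 1;

        /* Step 5: relax the four connected neighbours. */
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int i = 0; i < 4; i++) {
            int nx = bx + dx[i], ny = by + dy[i];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H)
                continue;
            if (blocked[ny][nx] || closed[ny][nx])
                continue;
            if (g[by][bx] + 1 < g[ny][nx]) {
                g[ny][nx] = g[by][bx] + 1; /* more efficient path found */
                open[ny][nx] = 1;
            }
        }
    }
}
```

On an empty grid the result equals the Manhattan distance; adding a wall forces the search to route around it, and a fully closed wall makes it report that no path exists.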

6.20 The Simple Swarms Algorithm

1. For each unit, store its current position.
2. Update its x and z position according to its speed and heading (y-rotation), and its y position by adding its fall speed.
3. Change the unit’s fall speed due to gravity.
4. Calculate the difference between the agent’s position and the target’s position to give delta values for x, y, and z.
5. Calculate the relative angle of the target to the agent (e.g., using an arctangent of the x and z deltas, taken relative to the unit’s heading).
6. If the unit is fleeing from the target, negate this angle.
7. If an enemy is within a certain distance of the unit, then damage the enemy.
8. If the agent is not falling, its steering and heading are adjusted depending on whether it is in the inner or outer zone (if(abs(dz) + abs(dx) > SWARM_RANGE), it is in the outer zone).
9. If the unit is in the outer zone, increment the unit’s speed if it is less than its maximum. If the unit is heading in the correct direction (abs(angle) < UNIT_LOCK_ANGLE), give the unit’s y-rotation a small amount of waver; otherwise, change its y-rotation rapidly toward the correct heading.
10. If the unit is in the inner zone, change the unit’s y-rotation depending on its speed, and change its speed depending on its angle. E.g.
if(unit->speed & 0x1){
    unit->yrot += SWIRL_ANGLE;
}
else{
    unit->yrot -= SWIRL_ANGLE;
}
unit->speed = 48 - (abs(angle) >> 10);

(Scutt, 2002, p. 204-205)
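The zone test and the two zone behaviours (steps 8–10) can be sketched as a single update function. The constants, names, and simplified acceleration are illustrative, not the original game’s values:

```c
#define SWARM_RANGE 100.0
#define SWIRL_ANGLE 0.2

/* Absolute value helper (avoids pulling in math.h for one call). */
static double dabs(double v) { return v < 0.0 ? -v : v; }

/* Decide which swarm zone a unit is in (step 8) and apply either the
 * outer-zone acceleration (step 9, simplified) or the inner-zone swirl
 * (step 10).  Returns 1 if the unit is in the outer zone, 0 otherwise. */
int swarm_update(double dx, double dz, int max_speed,
                 int *speed, double *yrot)
{
    if (dabs(dz) + dabs(dx) > SWARM_RANGE) {
        /* Outer zone: accelerate toward the target. */
        if (*speed < max_speed)
            (*speed)++;
        return 1;
    }
    /* Inner zone: the swirl direction alternates with the low bit of
     * the speed, producing the milling "swarm" motion around the target. */
    if (*speed & 0x1)
        *yrot += SWIRL_ANGLE;
    else
        *yrot -= SWIRL_ANGLE;
    return 0;
}
```

The parity trick on the low bit of the speed is what keeps nearby units swirling in different directions, so the swarm never collapses into a single-file line.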

6.30 Formation Identification with an Influence Map

1. Determine the center of mass of our units (i.e., the approximate location with
the highest value as determined by the influence map).
2. For every enemy unit we can see:
a. Select the unit that has the greatest “gradient” of influences between
the center of mass of our units and his. We arbitrarily call that the
front. There can be only one front.
b. If a given unit is within two squares of a unit belonging to the
designated front, add it to the front as well.
c. If a given unit is further than two squares from the front and has a
gradient of influences less than that leading from the center of mass of
our units to the front, it is designated as a flank unit. There can be
several flanks.
d. If a given unit is within two squares of a unit belonging to a designated
flank, add it to the flank as well.
e. If the shortest path to a given unit from our center of mass runs through a square belonging or adjacent to a unit designated to the front, that unit is designated as part of the enemy’s rear. There can be
more than one rear.
f. If a given unit is within two squares of a unit belonging to an area
designated as the rear, add it to the rear as well.
3. Any unit that isn’t allocated to one of the preceding groups (front, flank, or
rear) is treated independently and can be considered an individual unit.
There can be any number of individuals.
(Woodcock, 2002, p. 224)
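Step 1 of the routine above, finding the center of mass from the influence map, might be sketched as follows, with the grid size and values purely illustrative:

```c
#define MAP_W 8
#define MAP_H 8

/* Step 1: the approximate center of mass of our units is the cell with
 * the highest influence value on the map.  Writes the winning cell's
 * coordinates into (*cx, *cy). */
void influence_center(double map[MAP_H][MAP_W], int *cx, int *cy)
{
    double best = map[0][0];
    *cx = 0;
    *cy = 0;
    for (int y = 0; y < MAP_H; y++)
        for (int x = 0; x < MAP_W; x++)
            if (map[y][x] > best) {
                best = map[y][x];
                *cx = x;
                *cy = y;
            }
}
```

The gradient comparisons in step 2 would then be taken between this cell and each visible enemy unit.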

[Figure: an example influence map (Woodcock, 2002, p. 221)]

6.40 Survey Results

In total, only 22 individuals were surveyed; however, the results are felt to be representative, as a wide range of people were surveyed in terms of age, gaming experience, and location.

Note: This was an open-ended question, and keywords were selected from the responses.

7.0 References

alexjc. (2008, February 20). Inquiry Idea’s [Msg 12]. Message posted to
http://aigamedev.com/forums/showthread.php?t=343&page=2

Babbage, C. (1864). Passages From The Life Of A Philosopher. London:


Longman, Green, Longman, Roberts & Green.

Charles, D., Fyfe, C., Livingstone, D., & McGlinchey, S. (2008). Biologically Inspired Artificial Intelligence for Computer Games. Medical Information Science Reference.

Geryk, B. (1998). A History of Real-Time Strategy Games. Gamespot.


Retrieved March 18, 2008, from
http://www.gamespot.com/gamespot/features/all/real_time/index.html

Higgins, D. (2002). Pathfinding Design Architecture. AI Game Programming


Wisdom (ed. S. Rabin), Charles River Media, 2002, pp. 122-132.

Lidén, L. (2002). Strategic and Tactical Reasoning with Waypoints. AI Game Programming Wisdom (ed. S. Rabin), Charles River Media, 2002, pp. 211-220.

Matthews, J. (2002). Basic A* Pathfinding Made Simple. AI Game


Programming Wisdom (ed. S. Rabin), Charles River Media, 2002, pp. 101-
113.

O’Brien, J. (2002). A Flexible Goal-Based Planning Architecture. AI Game


Programming Wisdom (ed. S. Rabin), Charles River Media, 2002, pp. 375-
383.

Poole, D., Mackworth, A., & Goebel, R. (1998), Computational Intelligence: A


Logical Approach, Oxford University Press.

Schwab, B. (2004). AI Game Engine Programming. Hingham, Massachusetts: Charles River Media.

Scutt, T. (2002). Simple Swarms as an Alternative to Flocking. AI Game


Programming Wisdom (ed. S. Rabin), Charles River Media, 2002, pp. 202-
208.

Skillings, J. (2006). Getting Machines to Think Like Us. Cnet News. Retrieved March
20, 2008, from http://www.news.com/Getting-machines-to-think-like-us/2008-
11394_3-6090207.html

Staff. (2005). Age of Empires III Q&A – Technology Overview. Retrieved March
18, 2008, from

http://au.gamespot.com/pc/strategy/ageofempiresiii/news.html?sid=6120
033&page=1

Strategy. definr.com. Retrieved March 18, 2008, from http://definr.com/strategy

Tactics. definr.com. Retrieved March 18, 2008, from http://definr.com/tactics

Tozour, P. (2002). Introduction to Bayesian Networks and Reasoning Under


Uncertainty. AI Game Programming Wisdom (ed. S. Rabin), Charles River
Media, 2002, pp. 345-357.

Tozour, P. (2002). The Evolution of Game AI. AI Game Programming Wisdom (ed. S. Rabin), Charles River Media, 2002, pp. 3-15.

Tuyls, K., & Parsons, S. (2007). What evolutionary game theory tells us about multiagent learning. Artificial Intelligence, 171, 406-416.

Weiss, G. (Ed.). (1999). Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. Cambridge, Massachusetts: The MIT Press.
