
Today
- AI
  - Fuzzy Logic
  - Neural Nets

CS 638, Fall 2001 (11/6/2001)

Fuzzy Logic
- A philosophical approach
  - Ontological commitment based on degree of truth
  - It is not a method for reasoning under uncertainty
- Crisp facts: distinct boundaries
- Fuzzy facts: imprecise boundaries
- Probability: incomplete facts
- Example: a scout reporting an enemy
  - "Two to three tanks at grid NV 123456" (crisp)
  - "A few tanks at grid NV 123456" (fuzzy)
  - "There might be 2 tanks at grid NV 54" (probabilistic)


Apply to Computer Games


- Can give characters different characteristics, e.g.:
  - Strength: strong, medium, weak
  - Aggressiveness: meek, medium, nasty
- Example rules:
  - If meek and attacked, run away fast
  - If medium and attacked, run away slowly
  - If nasty and strong and attacked, attack back
- Control of a vehicle:
  - Should slow down when close to the car in front
  - Should speed up when far behind the car in front
- Provides smoother transitions, not a sharp boundary


Fuzzy Sets
- Classical set theory
  - An object is either in or not in the set
- Fuzzy set theory: sets with smooth boundaries
  - An object is in a set by a matter of degree
  - Not completely in or out: somebody 6' tall might be in the set "tall" to degree 0.8
  - 1.0 => in the set
  - 0.0 => not in the set
  - 0.0 < degree < 1.0 => partially in the set
- Provides a way to write symbolic rules with terms like "medium" but evaluate them in a quantified way

Example Fuzzy Variable

- Each membership function tells us how much we consider a character to be in the set if it has a particular aggressiveness value
- Or, how much truth to attribute to the statement: "The character is nasty" (or meek, or neither)?

[Figure: membership (degree of truth) from 0.0 to 1.0 for the sets Meek, Medium, and Nasty, plotted over an aggressiveness axis running from -1 to 1]
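To make the picture concrete, the three membership functions could be coded as simple piecewise-linear shapes; the breakpoints below are assumptions for illustration, not values from the slides.

```python
def meek(x):
    # Fully meek below -0.5 aggressiveness, not meek at all above 0.0
    if x <= -0.5: return 1.0
    if x >= 0.0: return 0.0
    return -x / 0.5

def nasty(x):
    # Mirror image: not nasty below 0.0, fully nasty above 0.5
    if x <= 0.0: return 0.0
    if x >= 0.5: return 1.0
    return x / 0.5

def medium(x):
    # Triangle peaking at 0.0, falling to 0 at -0.5 and 0.5
    return max(0.0, 1.0 - abs(x) / 0.5)

print(meek(-0.25), medium(-0.25), nasty(-0.25))  # 0.5 0.5 0.0
```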

Fuzzy Set Operations: Complement


- The degree to which you believe something is not in the set is 1.0 minus the degree to which you believe it is in the set

[Figure: a fuzzy set FS and its complement plotted over a units axis; where FS has membership m, the complement has membership 1.0 - m]
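That rule is a one-liner; a minimal sketch:

```python
def fuzzy_not(mu):
    # Membership in the complement is 1 minus membership in the set
    return 1.0 - mu

print(fuzzy_not(0.8))  # 0.2
```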

Fuzzy Set Ops: Intersection (AND)


- If you have x degree of faith in statement A, and y degree of faith in statement B, how much faith do you have in the statement "A and B"?
  - E.g., how much faith that a person is about 6' high and tall?
- Does it make sense to attribute more truth than you have in one of A or B?

[Figure: membership functions for "About 6'" and "Tall" plotted over a height axis; their intersection lies under both curves]
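The rhetorical question points at the usual min rule for fuzzy AND; a sketch:

```python
def fuzzy_and(mu_a, mu_b):
    # Faith in "A and B" cannot exceed faith in either conjunct
    return min(mu_a, mu_b)

print(fuzzy_and(0.75, 0.3))  # 0.3
```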

Fuzzy Set Ops: Union (OR)


- If you have x degree of faith in statement A, and y degree of faith in statement B, how much faith do you have in the statement "A or B"?
  - E.g., how much faith that a person is about 6' high or tall?
- Does it make sense to attribute less truth than you have in one of A or B?

[Figure: membership functions for "About 6'" and "Tall" plotted over a height axis; their union lies above both curves]
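Correspondingly, fuzzy OR is conventionally the max; a sketch:

```python
def fuzzy_or(mu_a, mu_b):
    # Faith in "A or B" is at least the faith in either disjunct
    return max(mu_a, mu_b)

print(fuzzy_or(0.75, 0.3))  # 0.75
```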

Fuzzy Rules
"If our distance to the car in front is small, and the distance is decreasing slowly, then decelerate quite hard"
- Fuzzy variables (distance, its rate of change, deceleration) are combined with fuzzy sets (small, decreasing slowly, quite hard)
- Conditions test membership in fuzzy sets
- Actions place an output variable (decelerate) in a fuzzy set (the "quite hard" deceleration set)
- We have a certain belief in the truth of the condition, and hence a certain strength of desire for the outcome
- Multiple rules may match to some degree, so we require a means to arbitrate and choose a particular goal: defuzzification
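Evaluating one such rule boils down to combining set memberships; an illustrative sketch (the membership values are assumed):

```python
# "If distance is small and distance is decreasing slowly,
#  then decelerate quite hard"
belief_small = 0.7              # assumed membership of distance in "small"
belief_decreasing_slowly = 0.4  # assumed membership of the rate of change
rule_strength = min(belief_small, belief_decreasing_slowly)  # 0.4
# rule_strength says how strongly this rule argues for "quite hard" deceleration
```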


Fuzzy Rules Example (from Gems)


- Rules for controlling a car:
  - Variables are the distance to the car in front, how fast that distance is changing (delta), and the acceleration to apply
  - Sets are:
    - Very small, small, perfect, big, very big - for distance
    - Shrinking fast, shrinking, stable, growing, growing fast - for delta
    - Brake hard, slow down, none, speed up, floor it - for acceleration
- There are rules for every combination of distance and delta sets, each defining an acceleration set (see the sketch below)
- Assume we have a particular numerical value for distance and delta, and we need to set a numerical value for acceleration
  - Extension: allow fuzzy values for input variables (degree to which we believe the value is correct)
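The 5 x 5 rule table could be stored as a plain lookup; the entries below are assumptions except for the four rules spelled out on the "Matching" slide.

```python
# Maps (distance set, delta set) -> acceleration set
rules = {
    ("small",   "growing"): "none",       # maintain speed
    ("small",   "stable"):  "slow down",
    ("perfect", "growing"): "speed up",
    ("perfect", "stable"):  "none",       # maintain speed
    # ... the remaining 21 combinations fill out the 5 x 5 table
}
```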


Set Definitions for Example


[Figure: membership functions for the three fuzzy variables. Distance: v. small, small, perfect, big, v. big. Delta: shrinking fast (<<), shrinking (<), stable, growing (>), growing fast (>>). Acceleration: brake, slow, present (maintain), fast, fastest]

Instance for Example


[Figure: the same set definitions with the current distance and delta values marked; the distance value falls where "small" and "perfect" overlap, the delta value where "stable" and "growing" overlap, and the acceleration output is marked "????"]
- Distance could be considered small or perfect
- Delta could be stable or growing
- What acceleration?

Matching for Example


- Relevant rules are:
  - If distance is small and delta is growing, maintain speed
  - If distance is small and delta is stable, slow down
  - If distance is perfect and delta is growing, speed up
  - If distance is perfect and delta is stable, maintain speed
- For the first rule, "distance is small" has 0.75 truth, and "delta is growing" has 0.3 truth
  - So the truth of the conjunction (taking the min) is 0.3
- The other rule strengths are 0.6, 0.1 and 0.1
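Reproducing that arithmetic (the memberships for "perfect" and "stable" are inferred from the stated rule strengths, not given directly):

```python
mu_distance = {"small": 0.75, "perfect": 0.1}   # "perfect" inferred
mu_delta = {"growing": 0.3, "stable": 0.6}      # "stable" inferred

strengths = {
    "maintain speed": min(mu_distance["small"], mu_delta["growing"]),   # 0.3
    "slow down":      min(mu_distance["small"], mu_delta["stable"]),    # 0.6
    "speed up":       min(mu_distance["perfect"], mu_delta["growing"]), # 0.1
    "maintain (2nd)": min(mu_distance["perfect"], mu_delta["stable"]),  # 0.1
}
```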



Fuzzy Inference for Example


- Convert our belief into action
- For each rule, clip the action fuzzy set by our belief in the rule

[Figure: the "present" and "slow" acceleration sets, each clipped at the height of its rule's strength, plotted over the acceleration axis]
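Clipping just caps each output set's membership function at the rule's strength; a sketch:

```python
def clipped(membership_fn, strength):
    # The clipped set never rises above the strength of its rule
    return lambda x: min(membership_fn(x), strength)

# e.g. the "slow down" set clipped at 0.6, the "maintain" set at 0.3
```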

Defuzzification Example
- We have three output sets we have some reason to believe we are in, and each set covers a range of values
- Two options in going from the current state to a single value:
  - Mean of Max: take the rule we believe most strongly, and take the (weighted) average of its possible values
  - Center of Mass: take all the rules we partially believe, and take their weighted average
- In this example, we slow down either way, but we slow down more with Mean of Max
- Mean of Max is cheaper, but Center of Mass exploits more information
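Center of Mass over a sampled axis might look like this (an illustrative sketch; `clipped_sets` holds the clipped membership functions from the previous slide):

```python
def center_of_mass(clipped_sets, xs):
    # Weight each sample point by the combined (max) clipped membership there
    num = den = 0.0
    for x in xs:
        w = max(f(x) for f in clipped_sets)
        num += w * x
        den += w
    return num / den if den else 0.0
```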


Evaluation of Fuzzy Logic


- Does not necessarily lead to non-determinism
- Advantages
  - Allows use of numbers while still writing crisp rules
  - Allows use of fuzzy concepts such as "medium"
  - Biggest impact is for control problems
    - Helps avoid discontinuities in behavior
    - In the example problem, strict rules would give discontinuous acceleration
- Disadvantages
  - Sometimes results are unexpected and hard to debug
  - Additional computational overhead
  - There are other ways to get continuous acceleration


References
Nguyen, H. T. and Walker, E. A., A First Course in Fuzzy Logic, CRC Press, 1999.
Rao, V. B. and Rao, H. V., C++ Neural Networks and Fuzzy Logic, IDG Books Worldwide, 1995.
McCuskey, M., "Fuzzy Logic for Video Games", in Game Programming Gems, ed. DeLoura, Charles River Media, 2000, Section 3, pp. 319-329.


Neural Networks
- Inspired by natural decision-making structures (real nervous systems and brains)
- If you connect lots of simple decision-making pieces together, they can make more complex decisions
  - Compose simple functions to produce complex functions
- Neural networks:
  - Take multiple numeric input variables
  - Produce multiple numeric output values
  - Normally threshold outputs to turn them into discrete values
  - Map discrete values onto classes, and you have a classifier!
  - But, the only time I've used them is as approximation functions


Simulated Neuron - Perceptron


- Inputs (a_j) come from other perceptrons, with weights (W_i,j)
  - Learning occurs by adjusting the weights
- The perceptron calculates the weighted sum of its inputs: in_i = Σ_j W_i,j a_j
- A threshold function calculates the output: a_i = g(in_i)
  - Step function: if in_i > t then a_i = 1, else a_i = 0
  - Sigmoid: g(x) = 1 / (1 + e^(-x))
- The output becomes input for the next layer of perceptrons

[Figure: inputs a_j scaled by weights W_i,j, summed into in_i, passed through g to give output a_i]
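A minimal forward pass matching these formulas (an illustrative sketch):

```python
import math

def perceptron(inputs, weights, g):
    # Weighted sum of inputs, passed through the activation function g
    in_i = sum(w * a for w, a in zip(weights, inputs))
    return g(in_i)

step = lambda x, t=0.5: 1 if x > t else 0
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

print(perceptron([1, 0], [0.1, 0.6], step))  # 0, as in the OR example below
```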


Network Structure
- A single perceptron can represent AND and OR, but not XOR
  - Combinations of perceptrons are more powerful
- Perceptrons are usually organized in layers
  - Input layer: takes external input
  - Hidden layer(s)
  - Output layer: produces external output
- Feed-forward vs. recurrent
  - Feed-forward: outputs only connect to later layers
    - Learning is easier
  - Recurrent: outputs can connect to earlier layers or the same layer
    - Gives the network internal state
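A feed-forward pass is the single-perceptron computation repeated layer by layer; a sketch (assumed representation: each layer is a list of weight vectors, one per perceptron):

```python
def feed_forward(x, layers, g):
    # Outputs of one layer become the inputs of the next
    for layer in layers:
        x = [g(sum(w * a for w, a in zip(weights, x))) for weights in layer]
    return x
```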


Neural network for Quake


- Four input perceptrons
  - One input for each condition: Enemy, Sound, Dead, Low Health
- Four-perceptron hidden layer
  - Fully connected
- Five output perceptrons
  - One output for each action: Attack, Retreat, Wander, Chase, Spawn
- Choose the action with the highest output
  - Or, probabilistic action selection: choose at random, weighted by the outputs

[Figure: the network, with inputs Enemy, Sound, Dead, and Low Health feeding a fully connected hidden layer, which feeds the outputs Attack, Retreat, Wander, Chase, and Spawn]
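Both selection schemes in code (the output values are assumed for illustration):

```python
import random

outputs = {"Attack": 0.9, "Retreat": 0.2, "Wander": 0.4,
           "Chase": 0.7, "Spawn": 0.1}  # assumed example activations

best = max(outputs, key=outputs.get)    # deterministic: pick "Attack"

actions, weights = zip(*outputs.items())
chosen = random.choices(actions, weights=weights)[0]  # weighted random pick
```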

Learning Neural Networks


- Learning from examples
  - Examples consist of an input and the correct output
- Learn if the network's output doesn't match the correct output
  - Adjust weights to reduce the difference
  - Only change weights by a small amount (the learning rate α)
- Basic perceptron learning:
  - W_j = W_j + α(t - o)a_j
  - If the output is too high, (t - o) is negative, so W_j will be reduced
  - If the output is too low, (t - o) is positive, so W_j will be increased
  - If a_j is negative, the opposite happens
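The update rule in code (a sketch; α = 0.1 to match the worked example that follows):

```python
ALPHA = 0.1  # learning rate

def train_step(weights, inputs, target, output):
    # Nudge each weight in the direction that shrinks the error (t - o)
    return [w + ALPHA * (target - output) * a
            for w, a in zip(weights, inputs)]
```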


Neural Net Example


- A single perceptron to represent OR
  - Two inputs
  - One output (1 if either input is 1)
  - Step function (if the weighted sum > 0.5, output a 1)
- The initial state (below) gives an error on the (1,0) -> 1 example, so training occurs

[Figure: inputs 1 and 0 with weights W1 = 0.1 and W2 = 0.6; Σ W_j a_j = 0.1, so g(0.1) = 0]

Neural Net Example

- Training: W_j = W_j + α(t - o)a_j
  - W1 = 0.1 + 0.1(1 - 0)(1) = 0.2
  - W2 = 0.6 + 0.1(1 - 0)(0) = 0.6
- After this step, try the (0,1) -> 1 example
  - The weighted sum is 0.6 > 0.5, so g(0.6) = 1
  - No error, so no training

[Figure: inputs 0 and 1 with weights W1 = 0.2 and W2 = 0.6; Σ W_j a_j = 0.6, so g(0.6) = 1]

Neural Net Example


- Try the (1,0) -> 1 example again

[Figure: inputs 1 and 0 with weights W1 = 0.2 and W2 = 0.6; Σ W_j a_j = 0.2, so g(0.2) = 0]
- Still an error, so training occurs
  - W1 = 0.2 + 0.1(1 - 0)(1) = 0.3
  - W2 = 0.6 + 0.1(1 - 0)(0) = 0.6
- And so on
  - What is a network that works for OR?
  - What about AND?
  - Why not XOR?
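The whole walkthrough as a runnable loop (a sketch; it reproduces the updates above and converges once W1 exceeds the 0.5 threshold):

```python
examples = [((1, 0), 1), ((0, 1), 1)]  # two of the OR training cases
w = [0.1, 0.6]
alpha, threshold = 0.1, 0.5

for _ in range(10):  # a few passes are plenty here
    for inputs, target in examples:
        output = 1 if sum(wi * a for wi, a in zip(w, inputs)) > threshold else 0
        w = [wi + alpha * (target - output) * a for wi, a in zip(w, inputs)]

print(w)  # W1 has grown past 0.5; both examples are now classified correctly
```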


Neural Networks Evaluation


- Advantages
  - Handle errors well
  - Graceful degradation
  - Can learn novel solutions
- Disadvantages
  - "Neural networks are the second best way to do anything"
  - Can't understand how or why the learned network works
  - Examples must match real problems
  - Need as many examples as possible
  - Learning takes lots of processing
    - It is incremental, so learning during play might be possible


References

Mitchell, T., Machine Learning, McGraw-Hill, 1997.
Russell, S. and Norvig, P., Artificial Intelligence: A Modern Approach, Prentice Hall, 1995.
Hertz, J., Krogh, A. and Palmer, R., Introduction to the Theory of Neural Computation, Addison-Wesley, 1991.
Cowan, J. and Sharp, D., "Neural Nets and Artificial Intelligence", Daedalus 117:85-121, 1988.

