Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, Dalian, 13-16 August 2006

A DIFFERENTIAL EVOLUTION WITH SIMULATED ANNEALING UPDATING METHOD
JING-YU YAN, QING LING, DE-MIN SUN
Automation Department, University of Science and Technology of China, Hefei 230027, China
E-MAIL: jyyan@mail.ustc.edu.cn, qingling@mail.ustc.edu.cn, sundm@ustc.edu.cn

Abstract:
In this paper, we point out that the conventional differential
evolution (CDE) algorithm runs the risk of being trapped by
local optima because of its greedy updating strategy and
intrinsic differential property. A novel simulated annealing
differential evolution (SADE) algorithm is proposed to
remedy the premature convergence of CDE. With the aid of a
simulated annealing updating strategy, SADE is able to escape
from local optima and achieve a balance between
exploration and exploitation. Optimization results on standard
test suites indicate that SADE outperforms CDE in global
search ability.

Keywords:
Differential evolution; simulated annealing; function
optimization; global search

1. Introduction

Since the mid-1950s, several heuristic global search
techniques inspired by biological evolution have been
suggested. Most of these evolutionary algorithms (EAs), such
as evolutionary programming (EP), evolution strategy (ES),
and genetic algorithms (GAs), have proved successful on
numerical optimization problems [1].
Among the various EAs, differential evolution (DE),
which is characterized by a mutation operator and
competition strategy different from those of the other EAs,
has shown great promise on many numerical benchmark
problems and in real-world applications [2]. For example,
DE has been compared on several test suites with tabu
search, simulated annealing, a simple evolutionary algorithm,
particle swarm optimization, and so on [2-7]. In all these
tests, DE outperforms the other methods in most cases,
including multimodal and constrained problems. These
results indicate that DE has the advantages of a fast
convergence rate and low consumption of function
evaluations.
Though DE performs well in many fields, its fast
convergence usually causes the optimization process to be
trapped by a local optimum. This disadvantage comes from
two aspects of DE: the greedy updating method and the
intrinsic differential property. The greedy updating strategy
results in premature convergence of DE, while the
differential operation maintains the premature status of DE.
To improve the global optimization property of DE, a
novel simulated annealing differential evolution (SADE)
algorithm which uses the simulated annealing updating
strategy is proposed in this paper. Different from the
conventional differential evolution (CDE) algorithm which
uses the greedy updating strategy, SADE performs better in
preserving the diversity of individuals, and achieves the
balance between exploration and exploitation successfully.
This paper is arranged as follows. Section 2 describes
the basic updating mechanism of DE, and constructs a
deceptive function to illustrate the disadvantage of CDE. In
Section 3, the simulated annealing differential evolution
algorithm is proposed to improve the global search ability
of CDE. Numerical results of CDE and SADE on test
functions are compared in Section 4. Conclusions and
discussions are provided in Section 5.
Without loss of generality, we limit our investigation
to minimization problems in this paper.
2. Premature Property of Conventional DE

As mentioned in Ref. [2], the updating method of CDE
works as follows: if the new solution yields a better objective
function value than its parent, the parent is replaced with the
new solution; otherwise, the parent is preserved. This greedy
updating mechanism ensures the fast convergence rate of
CDE. On the other hand, the scheme tends to eliminate the
currently inferior individuals even though they may have
the potential to evolve toward the global optimum. Therefore,
the diversity of the population cannot be guaranteed and
premature convergence of CDE is inevitable.
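The greedy rule can be sketched in a few lines (a minimal illustration of the selection step for a minimization problem; the function name greedy_update is ours, not the paper's):

```python
def greedy_update(parent, child, f):
    """CDE updating rule: keep whichever of parent/child has the
    lower objective value (minimization)."""
    return child if f(child) < f(parent) else parent

# A currently inferior child is discarded even if it might lie in
# the basin of the global optimum.
f = lambda x: x * x
print(greedy_update(2.0, 3.0, f))  # parent survives -> 2.0
```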
Furthermore, the intrinsic differential property also
promotes the premature convergence of CDE. Let x_{i,G},
i = 1, 2, ..., NP, denote
the solutions in generation G, where NP is the population
size. In CDE, the child individual v is generated according to:

    v = x_{r1,G} + F (x_{r2,G} - x_{r3,G})        (1)

Here r1, r2, r3 ∈ [1, NP] are three mutually different
integers and F is a predetermined positive parameter.
Consider the situation where the optimization process
is trapped by a local optimum and all solutions are
concentrated together. The difference between x_{r2,G} and
x_{r3,G} is then small, so the new solution v tends to reside
near the parent x_{r1,G}. As a result, the diversity of the
population cannot be improved appreciably by the
recombination process, which means it is difficult for CDE
to escape from the local optimum.
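The collapse of the difference vector in a converged population can be shown with a short numerical sketch (variable names follow Eq. (1); the toy population values are our own):

```python
import random

def de_mutant(pop, F=0.5):
    """Generate one DE mutant v = x_r1 + F * (x_r2 - x_r3) from three
    mutually different individuals, as in Eq. (1)."""
    r1, r2, r3 = random.sample(range(len(pop)), 3)
    return pop[r1] + F * (pop[r2] - pop[r3])

random.seed(0)
# A converged population: all individuals within 1e-3 of a local
# optimum at 4.0.
converged = [4.0 + random.uniform(-1e-3, 1e-3) for _ in range(10)]
v = de_mutant(converged)
# The difference x_r2 - x_r3 is tiny, so v stays within ~2e-3 of 4.0
# and the recombination step cannot restore diversity.
print(abs(v - 4.0))
```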
To verify the above analysis, a DE deceptive function
is designed as:

    y = -3 sinc(2x + 10)    if -10 <= x < 0
    y = x sin(x)            if 0 <= x <= 10        (2)

where x is drawn from the interval [-10, 10], and the
function sinc(t) is given by:

    sinc(t) = 1             if t = 0
    sinc(t) = sin(t)/t      if t != 0              (3)

Figure 1. Landscape of the DE deceptive function
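Assuming the typeset copy dropped the minus sign of the first branch (restored here so that the reported global optimum y = -3 at x = -5 holds; the second branch is reproduced as printed and may also have lost symbols), the deceptive function can be coded as:

```python
import math

def sinc(t):
    """sinc(t) = sin(t)/t with sinc(0) = 1, as in Eq. (3)."""
    return 1.0 if t == 0 else math.sin(t) / t

def deceptive(x):
    """DE deceptive function of Eq. (2); the leading minus sign of the
    first branch is a reconstruction, not present in the typeset copy."""
    if -10 <= x < 0:
        return -3.0 * sinc(2 * x + 10)
    return x * math.sin(x)  # second branch, 0 <= x <= 10, as printed

print(deceptive(-5.0))  # value at x = -5, the reported global optimum: -3.0
```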

The landscape of the DE deceptive function is shown in
Fig. 1. The global optimum of the DE deceptive function is
x = -5 with objective value y = -3. There is a deceptive
local minimum at x = 8.5060 with objective value
y = -2.9160.
The conventional differential evolution algorithm was
run 100 times with parameters: population number NP = 10,
maximum generation MG = 100, scaling factor F = 0.5, and
crossover probability CR = 1. The global minimum was found
61 times, while the algorithm was trapped by the deceptive
solution 39 times, as shown in Fig. 2. These optimization
results indicate that the conventional differential evolution
algorithm suffers greatly from premature convergence.
3. Simulated Annealing Differential Evolution

According to the discussion in Section 2, the
premature convergence of CDE is caused by two factors: the
greedy updating method and the intrinsic differential
property. Here we focus on improving the global search
ability of differential evolution by modifying the updating
method. In this paper, we propose a novel simulated
annealing differential evolution (SADE) algorithm in which
the greedy updating method is replaced by a simulated
annealing updating method. The simulated annealing
updating method is able to protect promising individuals
and improve the diversity of the population.
Simulated annealing introduces two additional
parameters: the initial acceptance probability IAP and the
annealing speed AS. The procedure of SADE is as follows:
1) Initialization: Initialize the population and algorithm
parameters. Set the current acceptance probability AP
equal to IAP, and the current generation CG equal to 0.
2) Recombination: For each parent, generate a new
solution according to the recombination rule in Eq. (1).
Then execute the crossover operation with crossover
rate CR, as shown in Ref. [2].
3) Updating: For each pair of parent and child, if the
objective value of the child is better than that of the
parent, replace the parent with the child. Otherwise,
replace the parent with the child with probability AP.
4) Annealing: Update the acceptance probability AP with
the simulated annealing rule:

    AP = IAP / log(10 + CG · AS)        (4)
    CG = CG + 1                         (5)

5) Termination: Stop the optimization process when CG
reaches the predetermined maximum generation.

Figure 2. Results of the DE deceptive function
According to this updating mechanism of the acceptance
probability, AP shrinks as the generation number increases.
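The shrinking schedule of Eq. (4) is easy to tabulate (a sketch assuming the natural logarithm, which the paper does not specify, with IAP = 0.5 and AS = 10 as in Table 1):

```python
import math

IAP, AS = 0.5, 10  # initial acceptance probability and annealing speed
for CG in (0, 10, 50, 100):
    AP = IAP / math.log(10 + CG * AS)  # Eq. (4)
    print(CG, round(AP, 3))
```

Under these assumptions AP falls from about 0.217 at CG = 0 to about 0.072 at CG = 100, so late generations are almost greedy.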
Therefore, at the beginning of the optimization process,
newly generated promising solutions have a large opportunity
to survive and the diversity of the population is guaranteed;
in this stage, SADE focuses on exploration of the fitness
landscape. At the end of the optimization process, the
updating strategy degenerates to the greedy rule and SADE
focuses on exploitation of the fitness landscape. It is the
simulated annealing updating scheme that achieves the
balance between exploration and exploitation for SADE.
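Steps 1)-5) can be condensed into a one-dimensional sketch (a minimal illustration of the procedure, not the authors' code; parameter names NP, F, CR, MG, IAP, and AS follow the paper, everything else is assumed):

```python
import math
import random

def sade(f, bounds, NP=10, F=0.5, CR=1.0, MG=100, IAP=0.5, AS=10):
    """Minimal SADE sketch: DE recombination (Eq. (1)) with the
    simulated annealing updating rule of step 3) and the annealing
    schedule of Eqs. (4)-(5)."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(NP)]
    for CG in range(MG):
        AP = IAP / math.log(10 + CG * AS)          # Eq. (4)
        for i in range(NP):
            r1, r2, r3 = random.sample(range(NP), 3)
            v = pop[r1] + F * (pop[r2] - pop[r3])  # Eq. (1)
            # Crossover degenerates to taking v in the 1-D case.
            child = min(max(v, lo), hi) if random.random() < CR else pop[i]
            # SA updating: always accept an improving child; accept a
            # worse child with probability AP.
            if f(child) < f(pop[i]) or random.random() < AP:
                pop[i] = child
    return min(pop, key=f)

random.seed(1)
best = sade(lambda x: x * x, (-10, 10))
print(best)
```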
4. Optimization Results on Test Functions

In this section, the SADE algorithm is compared with
CDE on several standard test functions. Considering the
intrinsic differential property of DE, the test functions are
divided into two groups. In group 1, the global minimum of
each function is not at the center of the search space; in
group 2, the global minimum of each function is at the
center of the search space.
Group 1 includes six test functions:
1) Modified first De Jong function (sphere) [2]:

    f1(x) = x1^2 + x2^2 + x3^2        (6)
    x1 ∈ [-5, 5.12], x2 ∈ [-3, 5.12], x3 ∈ [-1, 5.12]

The global minimum is f1(0,0,0) = 0. Considering the
accuracy of optimization, solutions with objective value
less than 0.1 are treated as the global minimum.
2) Second De Jong function [2]:

    f2(x) = 100(x1^2 - x2)^2 + (1 - x1)^2        (7)
    x1, x2 ∈ [-2.048, 2.048]

The global minimum is f2(1,1) = 0. The accuracy bound is
the same as for test function f1.
3) Modified Ackley function [8]:

    y = 20 + exp(1) - 20 exp(-0.2 sqrt((x1^2 + x2^2)/2))
        - exp((cos(2πx1) + cos(2πx2))/2)          (8)
    x1 ∈ [-70, 10], x2 ∈ [-10, 70]

The test function is full of local minima, and the global
minimum is f3(0,0) = 0. The accuracy bound is the same as
for test function f1.
4) Michalewicz function [8]:

    y = -sin(x1) sin^20(x1^2/π) - sin(x2) sin^20(2·x2^2/π)        (9)
    x1, x2 ∈ [0, π]

The global minimum of the test function is f4(1.57,
2.20) = -1.8011. We treat solutions with objective value
less than -1.79 as the global minimum.
5) Ripple function [8]:

    y = -exp[-2(log 2)((x - 0.1)/0.8)^2] · [sin^6(5πx) + 0.1 cos^2(500πx)]        (10)
    x ∈ [0, 1]

The global minimum is f5(0.1) = -1.1. We treat solutions
with objective value less than -1.05 as the global minimum.
6) DE deceptive function, which is given in Section 2.
To set the accuracy bound, we treat solutions in the interval
[-5.05, -4.95] of the variable space as the global minimum.
Group 2 includes two test functions:
7) Rastrigin function [8]:

    y = 20 + x1^2 - 10 cos(2πx1) + x2^2 - 10 cos(2πx2)        (11)
    x1, x2 ∈ [-5.12, 5.12]

The global minimum of the test function is f7(0,0) = 0.
The accuracy bound is the same as for test function f1.
8) Bohachevsky F1 function [8]:

    y = x1^2 + 2 x2^2 - 0.3 cos(3πx1) - 0.4 cos(4πx2) + 0.7        (12)
    x1, x2 ∈ [-50, 50]

The global minimum is f8(0,0) = 0. We treat solutions with
objective value less than 0.05 as the global minimum.
Table 1 lists the control parameters of the two
algorithms and the final results, where GFN stands for the
number of runs (out of 100) in which the global minimum
is found.

Table 1. The results of function optimization

            Common settings       CDE        SADE
    f(x)    NP    CR    MG        GFN    IAP    AS     GFN
    f1      10    0.5   100        73            10      91
    f2      10    0.5   100        69            10      92
    f3      10    0.7    80        86          1000      90
    f4            0.5    60        84    0.8   1000      86
    f5      10    0.4    60        74    0.6    100      82
    f6      10    0.7    80        69    0.8    100      79
    f7      10    0.4    50        56    0.6   1000      53
    f8      10    0.5    50        96    0.8    100      94
As shown in Table 1, SADE outperforms CDE on the
test functions in group 1. On the test functions in group 2,
however, there is no evidence of a difference between the
optimization results of SADE and CDE. The main reason
for this phenomenon is the intrinsic differential property of
DE, which tends to generate new solutions near the center
of the search space and thus counteracts the negative effect
of the greedy updating strategy.
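For reference, the two group-2 functions can be written out directly (a sketch assuming the reconstructed Eqs. (11) and (12), whose forms match the standard Rastrigin and Bohachevsky F1 definitions with the π factors restored):

```python
import math

def rastrigin(x1, x2):
    """Rastrigin function, Eq. (11); global minimum f7(0,0) = 0."""
    return (20 + x1**2 - 10 * math.cos(2 * math.pi * x1)
               + x2**2 - 10 * math.cos(2 * math.pi * x2))

def bohachevsky_f1(x1, x2):
    """Bohachevsky F1 function, Eq. (12); global minimum f8(0,0) = 0."""
    return (x1**2 + 2 * x2**2 - 0.3 * math.cos(3 * math.pi * x1)
            - 0.4 * math.cos(4 * math.pi * x2) + 0.7)

# Both vanish (up to floating-point rounding) at the center (0, 0).
print(rastrigin(0.0, 0.0), bohachevsky_f1(0.0, 0.0))
```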
5. Conclusions

In this paper, the premature convergence of the
conventional differential evolution algorithm is analyzed and
illustrated with numerical results. Two factors cause the
premature convergence of CDE: the greedy updating method
and the intrinsic differential property. A novel simulated
annealing differential evolution (SADE) algorithm is
proposed to improve the global search ability of CDE. With
the aid of the simulated annealing updating method, SADE
is able to achieve the balance between exploration and
exploitation. Numerical results on several test functions
show that SADE outperforms CDE in most cases, especially
when the optimal solutions of the test functions are not at
the center of the search space.
Future directions of our research will focus on the
intrinsic differential property of SADE. By adjusting the
distribution of the children in each generation, SADE is
expected to achieve better population diversity and global
search ability during the evolution.
References

[1] R. Storn, K. Price, "Differential evolution: a simple and
efficient adaptive scheme for global optimization over
continuous spaces", Technical Report, International
Computer Science Institute, Berkeley, 1995.
[2] K. Price, "Differential evolution: a fast and simple
numerical optimizer", Biennial Conference of the North
American Fuzzy Information Processing Society, pp.
524-527, 1996.
[3] R. Storn, K. Price, "Minimizing the real functions of
the ICEC'96 contest by differential evolution",
International Conference on Evolutionary Computation,
pp. 842-844, 1996.
[4] K. Price, "Differential evolution vs. the functions of the
2nd ICEC", International Conference on Evolutionary
Computation, pp. 153-157, 1997.
[5] R. Storn, K. Price, "Differential evolution: a simple and
efficient heuristic for global optimization over
continuous spaces", Journal of Global Optimization,
vol. 11, pp. 341-359, 1997.
[6] J. Vesterstrom, R. Thomsen, "A comparative study of
differential evolution, particle swarm optimization,
and evolutionary algorithms on numerical benchmark
problems", Congress on Evolutionary Computation,
pp. 1980-1987, 2004.
[7] R. Thomsen, "Multimodal optimization using
crowding-based differential evolution", Congress on
Evolutionary Computation, pp. 1382-1389, 2004.
