Abstract- A large number of iterations and oscillations are the major concerns in solving the economic load dispatch problem using the Hopfield neural network. This paper develops two different methods, the slope adjustment and bias adjustment methods, in order to speed up the convergence of the Hopfield neural network system. Algorithms of economic load dispatch for piecewise quadratic cost functions using the Hopfield neural network have been developed for the two approaches. The results are compared with those of a numerical approach and the traditional Hopfield neural network approach. To guarantee and speed up convergence, adaptive learning rates are also developed by using energy functions and applied to the slope and bias adjustment methods. The results of the traditional, fixed learning rate, and adaptive learning rate methods are compared in the economic load dispatch problem.

Key words- Economic load dispatch, Hopfield neural networks, adaptive Hopfield neural networks.

I. INTRODUCTION

In a power system, the operation cost at each time needs to be minimized via economic load dispatch (ELD). Traditionally, the cost function of each generator has been

Since Hopfield introduced them in 1982 [2] and 1984 [3], Hopfield neural networks have been used in many different applications. The important property of the Hopfield neural network is the decrease in its energy by a finite amount whenever there is any change in its inputs [11]. Thus, the Hopfield neural network can be used for optimization. Tank and Hopfield [4] described how several optimization problems can be rapidly solved by highly interconnected networks of simple analog processors, which are an implementation of the Hopfield neural network. Park and others [5] presented the economic load dispatch for piecewise quadratic cost functions using the Hopfield neural network. The results of this method compared very well with those of the numerical method in a hierarchical approach [1]. King and others [6] applied the Hopfield neural network to the economic and environmental dispatching of electric power systems. These applications, however, involved a large number of iterations and often showed oscillations during transients. This suggests a need for improvement in convergence through an adaptive approach, such as the adaptive learning rate method developed by Ku and Lee [7] for a diagonal recurrent neural network.
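The energy-decrease property can be illustrated with a minimal discrete Hopfield sketch (a toy under stated assumptions: random symmetric weights, ±1 states, and asynchronous threshold updates; this is not the network formulation used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2            # symmetric weights ...
np.fill_diagonal(W, 0.0)     # ... with no self-connections
b = rng.normal(size=n)
s = rng.choice([-1.0, 1.0], size=n)

def energy(state):
    """Hopfield energy E = -1/2 s^T W s - b^T s."""
    return -0.5 * state @ W @ state - b @ state

# Asynchronous threshold updates never increase the energy, so the
# network settles into a (local) minimum -- the basis for using it
# as an optimizer.
for _ in range(5):
    for i in range(n):
        before = energy(s)
        s[i] = 1.0 if W[i] @ s + b[i] >= 0.0 else -1.0
        assert energy(s) <= before + 1e-12
```

Because each single-unit update can only lower (or keep) the energy, mapping a cost function onto this energy is what allows the network to solve optimization problems such as ELD.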
Fig. 1. Sigmoidal threshold function with different values of the gain parameter (input Ui on the horizontal axis).

To speed up convergence, two adaptive adjustment methods are developed in this paper: the slope adjustment and bias adjustment methods.

There is a limitation in the slope adjustment method in that the slopes are small near the saturation region of the sigmoidal function (Fig. 1). If every input can use the same maximum possible slope, convergence will be much faster. This can be achieved by changing the bias to shift the input near the center of the sigmoidal function. The bias can be changed following the same gradient-descent method used in the slope adjustment method.
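The effect the bias adjustment exploits can be sketched numerically (a hedged illustration only; the gain value, operating point, and function names below are assumptions, not the paper's implementation):

```python
import numpy as np

def sigmoid(u, gain=1.0, bias=0.0):
    """Sigmoidal threshold function with a gain parameter (cf. Fig. 1)."""
    return 1.0 / (1.0 + np.exp(-gain * (u - bias)))

def sigmoid_slope(u, gain=1.0, bias=0.0):
    """Derivative of the sigmoid: largest (gain/4) at u = bias,
    nearly zero in the saturation regions."""
    g = sigmoid(u, gain, bias)
    return gain * g * (1.0 - g)

# An input deep in the saturation region sees almost no slope ...
flat = sigmoid_slope(200.0, gain=0.05)
# ... but shifting the bias to the operating point recenters the input
# on the steepest part of the curve, restoring the maximum slope gain/4.
centered = sigmoid_slope(200.0, gain=0.05, bias=200.0)  # 0.05 / 4 = 0.0125
```

Centering every neuron's input in this way lets all inputs operate at the same maximum slope, which is the intuition behind the faster convergence of the bias adjustment method.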
V. SIMULATION RESULTS
Fig. 2. Cost using the Hopfield network (dotted line), slope adjustment method with fixed learning rate (dashed line), and bias adjustment method with fixed learning rate (solid line).
[Table: unit outputs and total power (MW) for total demands of 2400.0, 2500.0, 2600.0, and 2700.0 MW.]
Fig. 3. Cost for fixed learning rate (solid line) and adaptive learning rate (dashed-dotted line).
D. Momentum

Momentum is applied to the input of each neuron to speed up the convergence of the system. When momentum is applied to the system, the number of iterations is drastically reduced. In Table 5, a momentum factor of 0.9 is applied and the number of iterations is reduced to about 10 percent of that of the conventional Hopfield neural network, while lower momentum factors give slower responses. When momentum with the same momentum factor is applied to the slope adjustment method with adaptive learning rates, the number of iterations is reduced further, to about 60 percent of that of the Hopfield neural network with the input momentum. The number of iterations, as seen in Table 6, can be reduced to about one third, which is about 3 percent of the Hopfield network without momentum, when the momentum factor for the gain parameter is 0.97.

Table 4. Results for the bias adjustment method with fixed learning rate, η = 1.0 (A), and adaptive learning rate (B).
[Table 4: unit outputs and totals (MW) for total demands of 2400-2700 MW under methods (A) and (B).]

Table 5. Results with momentum, α = 0.9, applied to the input for the conventional Hopfield network (A) and the adaptive slope adjustment method (B).
[Table 5: unit outputs (MW) for total demands of 2400-2700 MW under methods (A) and (B).]

[Figure: cost versus iterations.]
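The mechanics of the input momentum can be sketched on a scalar toy problem (a hedged illustration; the quadratic energy, learning rate, and momentum factor below are arbitrary choices, not the paper's ELD formulation):

```python
def descend(alpha, lr=0.1, steps=50, u=1.0):
    """Gradient descent on the toy energy E(u) = u**2 / 2, where each
    input change carries over a fraction alpha of the previous change."""
    prev_delta = 0.0
    for _ in range(steps):
        delta = -lr * u + alpha * prev_delta  # dE/du = u
        u += delta
        prev_delta = delta
    return u

u_plain = descend(alpha=0.0)      # no momentum
u_momentum = descend(alpha=0.5)   # momentum: far closer to the minimum
```

Re-applying a fraction of the previous input change strengthens the per-step contraction toward the minimum, which mirrors the reduced iteration counts reported in Tables 5 and 6.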
Unit |        2400 MW         |        2500 MW
     | Numerical  Slope  Bias | Numerical  Slope  Bias
  1  |   193.2    191.5  189.0|   206.6    205.7  206.7
  2  |   204.1    203.0  201.7|   206.5    207.5  205.8
K. Y. Lee, A. Sode-Yome, and J. H. Park (Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802, U.S.A.):

The authors appreciate the interest of the discussers and their comments.

[1] E. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines, John Wiley & Sons, NY, 1989.
[2] J. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, Menlo Park, CA, 1991.