Engineering Computations: International Journal for Computer-Aided Engineering and Software
Vol. 27 No. 1, 2010, pp. 155-182
© Emerald Group Publishing Limited, 0264-4401
DOI 10.1108/02644401011008577
Received 6 October 2008; Revised 8 January 2009; Accepted 19 January 2009
An improved ant colony
optimization for constrained
engineering design problems
A. Kaveh
Iran University of Science and Technology, Tehran, Iran, and
S. Talatahari
Department of Civil Engineering, Tabriz University, Tabriz, Iran
Abstract
Purpose: The computational drawbacks of existing numerical methods have forced researchers to rely on heuristic algorithms. Heuristic methods are powerful in obtaining the solution of optimization problems. Although they are approximate methods (i.e. their solutions are good, but not provably optimal), they do not require the derivatives of the objective function and constraints. Also, they use probabilistic transition rules instead of deterministic rules. The purpose of this paper is to present an improved ant colony optimization (IACO) for constrained engineering design problems.
Design/methodology/approach: IACO can handle continuous and discrete problems by using a sub-optimization mechanism (SOM). SOM is based on the principles of the finite element method, working as a search-space updating technique. SOM can also reduce the size of the pheromone matrices and decision vectors and the number of evaluations. Although IACO decreases pheromone updating operations as well as optimization time, the probability of finding an optimum solution is not reduced.
Findings: Utilizing SOM in the ACO algorithm decreases the size of the pheromone vectors, the size of the decision vector, the size of the search space, the number of function evaluations, and, finally, the required optimization time. SOM performs as a search-space updating rule, and it can convert discrete and continuous search domains into each other.
Originality/value: The suitability of ACO for constrained engineering design problems is demonstrated through its application to the optimal design of different engineering problems.
Keywords Optimum design, Civil engineering, Programming and algorithm theory
Paper type Research paper
1. Introduction
A large number of algorithms based on numerical linear and nonlinear programming
methods have been developed to solve various engineering optimization problems in
recent decades. Although these numerical optimization methods provide a useful
strategy to obtain the global optimum (or a point near it) for simple and ideal models, they
have some disadvantages in handling engineering problems (e.g. complex derivatives,
sensitivity to initial values, and the large amount of enumeration memory required).
Many real-world engineering optimization problems are highly complex in nature and
quite difficult to solve using these methods.
The computational drawbacks of existing numerical methods have forced
researchers to rely on heuristic algorithms (Lee and Geem, 2005). Heuristic methods are
quite suitable and powerful for obtaining the solution of optimization problems.
Although these are approximate methods (i.e. their solutions are good, but not provably
optimal), they do not require the derivatives of the objective function and constraints.
Also, they use probabilistic transition rules instead of deterministic rules.
Ant colony optimization (ACO), being a relatively new heuristic approach, is a
cooperative search algorithm which combines rules and randomness, imitating the
foraging behavior of ant colonies. One basic idea of the ACO approach is to employ the
counterpart of the pheromone trail used by real ants as an indirect communication and
as a form of memory of previously found solutions.

The first author is grateful to the Iran National Science Foundation for the support.
Previous studies have shown that ACO performs very well for solving small
problems, but for problems containing large search spaces, the proficiency of ACO
decreases (Dorigo and Stutzle, 2004). In other words, the computational cost increases
rapidly with the size of the search space. Some attempts have been made to
parallelize ACO to overcome this problem (Bullnheimer et al., 1998; Middendorf et al.,
2002; Piriyakumar and Levi, 2002; Benkner et al., 2005). It is common to adopt the
island model approach to develop parallel ACO algorithms, in which the exchange of
information plays a major role. Solutions, pheromone matrices, and parameters have
been tested as the object of such an exchange. In Bullnheimer et al. (1998) solutions and
pheromone levels are exchanged, producing a rather high volume of communication,
which requires a significant part of the computational time. In Benkner et al. (2005) the
communication of the whole pheromone matrix leads to a decrease in solution quality
as well as high runtime. Although parallelized approaches work as auxiliary tools to
improve the optimization algorithms, it seems these cannot meet all our requirements
in solving optimization problems in a logical time.
On the other hand, the ACO algorithms generally target discrete optimization
problems (Dorigo and Stutzle, 2004), and have been applied to various discrete
engineering problems in recent years such as water distribution system optimization
(Maier et al., 2003), optimal design of open channels, optimal soil hydraulic parameters
(Abbaspour et al., 2001), ground water design optimization problems (Li and Chan,
2006), structural optimization problems (Kaveh and Shojaee, 2007; Kaveh and
Shahrouzi, 2008; Kaveh et al., 2008b; Kaveh and Talatahari, 2008; Kaveh and
Jahanshahi, 2008), among many others. However, there are few adaptations of ACO to
continuous space function optimization problems until now. One of the first attempts to
apply an ant-related algorithm to the continuous optimization problems was
continuous ACO (CACO) (Bilchev and Parmee, 1995). Although the authors of CACO
claim that they draw inspiration from the original ACO formulation, CACO employs
the notion of nest, which does not exist in the ACO approach. Also, CACO does not
perform an incremental construction of solutions, which is one of the main
characteristics of the ACO.
Another ant-related approach to continuous optimization is continuous interacting
ant colony (CIAC), (Dreo and Siarry, 2002). CIAC uses two types of communication
between ants: indirect communication (spots of pheromone deposited in the search
space) and direct communication. CIAC differs in many ways from the original
concepts of ACO: there is direct communication between ants and no incremental
construction of solutions.
The third ant-based approach is ACO_R, introduced by Socha and Dorigo (2008).
Although ACO_R tries to utilize all operators of the original ACO, there are some
differences. In each construction step, an ant chooses a value for the variables using a
Gaussian kernel PDF composed of a number of regular Gaussian functions, which does
not exist in the original ACO. In ACO_R, the pheromone information is stored as a
solution archive, and the pheromone update is accomplished by adding the set of newly
generated solutions to the solution archive and removing the same number of worst
solutions. This process closely resembles the harmony memory operator in the
harmony search scheme (Lee and Geem, 2005), while in the original ACO, the pheromone
matrix contains the information of all possible states that an ant can select and
pheromone updating is done in a different manner. Like CACO and CIAC, ACO_R
does not qualify as an extension of ACO. In addition, all these continuous ACO-based
algorithms only deal with unconstrained optimization problems.
This paper utilizes an improved ant colony optimization (IACO) for solving
engineering problems with continuous or discrete search domains, without many
changes to the original ACO framework. In order to fulfill this aim, the
sub-optimization mechanism (SOM) is utilized. SOM, based on the principles of the
finite element method, is used to reduce the size of the pheromone matrices and decision
vectors. SOM not only performs as a search-space updating rule, but is also
capable of decreasing the number of evaluations as well as the pheromone updating
operations without decreasing the probability of finding the optimum solution.
Using SOM, exchanging between discrete and continuous search domains
(considering the required accuracy of the problem) becomes possible. There are two
approaches for using SOM if one is interested in performance: first, given a fixed
time to search, SOM can increase the quality of the solutions found in that time;
second, given a fixed solution quality, SOM can reduce the time needed to find a
solution not worse than that quality. Simulation results and comparisons based on
five constrained engineering design problems demonstrate the efficiency of the
proposed algorithm.
The rest of this paper is organized as follows: The problem formulation is given in
Section 2. Section 3 provides some basics for the ACO algorithm and its performance
for engineering design problems. In Section 4, the IACO is proposed and explained
in detail. Simulation results, based on some engineering design problems and
comparisons with previously reported results, are presented in Section 5, and the
discussion is provided in Section 6. Finally, the paper is ended with some concluding
remarks in Section 7.
2. Engineering optimization problems
2.1 Statement of the optimization design problem
Many engineering design problems can be formulated as constrained optimization
problems. Generally, a constrained optimization problem can be described as follows:
    find x to minimize f_cost(x)
    subject to:
        g_j(x) ≤ 0,    j = 1, 2, ..., n_g
        h_k(x) = 0,    k = 1, 2, ..., n_h
        x_i,min ≤ x_i ≤ x_i,max,    i = 1, 2, ..., d          (1)

where x = [x_1, x_2, ..., x_d]^T denotes the decision solution vector; f_cost is a cost
function (objective function); x_i,min and x_i,max are the minimum and the maximum
permissible values for the ith variable, respectively; n_g is the number of inequality
constraints and n_h is the number of equality constraints. In common practice, an equality
constraint h_k(x) = 0 can be replaced by an inequality constraint |h_k(x)| − ε ≤ 0,
where ε is a small tolerance. Thus all constraints can be transformed into
inequality constraints (He and Wang, 2007).
2.2 Constraint handling approach
The aim of constraint optimization is to search for feasible solutions with better
objective values. The vector {x} is a feasible solution if it satisfies all the constraints.
Due to the simplicity and ease of implementation, the penalty function method has
been considered as the most popular technique to handle the constraints. A penalty
function can be formulated as follows (Kaveh et al., 2008a):

    f_fitness(x) = f_cost(x) + Σ_{i=1}^{n_g + n_h} γ_i · f_penalty^(i)(x)          (2)

where f_fitness is the fitness function; γ_i denotes the penalty factor; and f_penalty^(i) denotes
the constraint violation for the ith constraint, which is employed as the penalty term.
Since the objective function and the constraint violations are simultaneously considered
in the penalty function, the performance of this kind of approach is significantly affected
by the penalty factor. However, suitable penalty factors are usually difficult to determine
and problem-dependent. In this paper, a feasibility-based rule introduced by Deb (2000) is
employed as a constraint-handling approach, as described in the following:
. Any feasible solution is preferred to any infeasible solution.
. Between two feasible solutions, the one having the better objective function value is preferred.
. Between two infeasible solutions, the one having the smaller sum of constraint violations is preferred. This sum is calculated as

    Viol = Σ_{j=1}^{n_g} max(0, g_j(x)) + Σ_{j=1}^{n_h} max(0, |h_j(x)| − ε)          (3)
Based on the above criteria, the objective function and the constraint violation
information are considered separately. Consequently, penalty factors are not used at all.
Moreover, in the first and third cases the search tends toward the feasible region rather
than the infeasible region, and in the second case the search tends toward the feasible
region with good solutions.
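The three rules above translate directly into a comparison operator. The following is a minimal Python sketch (the function names are illustrative; the paper gives no code), with equality constraints handled through the |h(x)| − ε ≤ 0 transformation of Section 2.1:

```python
def violation(x, ineq_constraints, eq_constraints, eps=1e-4):
    """Sum of constraint violations, Equation (3).

    Equality constraints h(x) = 0 are treated as |h(x)| - eps <= 0.
    """
    viol = sum(max(0.0, g(x)) for g in ineq_constraints)
    viol += sum(max(0.0, abs(h(x)) - eps) for h in eq_constraints)
    return viol

def deb_better(x1, x2, f_cost, ineq, eq, eps=1e-4):
    """Return True if x1 is preferred to x2 under Deb's three rules."""
    v1 = violation(x1, ineq, eq, eps)
    v2 = violation(x2, ineq, eq, eps)
    if v1 == 0.0 and v2 == 0.0:        # both feasible: lower cost wins
        return f_cost(x1) < f_cost(x2)
    if v1 == 0.0 or v2 == 0.0:         # feasible beats infeasible
        return v1 == 0.0
    return v1 < v2                     # both infeasible: smaller violation wins
```

Note that no penalty factor appears anywhere; the cost is only compared between feasible candidates.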
3. The ant colony algorithm implementation
3.1 ACO: general aspects
In 1992, Dorigo developed a paradigm known as ACO, a cooperative search technique
that mimics the foraging behavior of real-life ant colonies (Dorigo, 1992; Dorigo et al.,
1996). The ant algorithms mimic the techniques employed by real ants to rapidly
establish the shortest route from food source to their nest and vice versa. Ants start
searching the area surrounding their nest in a random manner. Ethologists observed that
ants can construct the shortest path from their colony to the food source and back using
pheromone trails (Deneubourg and Goss, 1989; Goss et al., 1990), as shown in Figure 1(a).
When ants encounter an obstacle (Figure 1(b)), at first, there is an equal probability for all
ants to move right or left, but after a while (Figure 1(c)), the number of ants choosing the
shorter path increases because of the increase in the amount of the pheromone on that
path. With the increase in the number of ants and pheromone on the shorter path, all of
the ants will tend to choose and move along the shorter one, Figure 1(d).
In fact, real ants use their pheromone trails as a medium for communication of
information among them. When an isolated ant comes across some food source in its
random sojourn, it deposits a quantity of pheromone on that location. Other randomly
moving ants in the neighborhood can detect this marked pheromone trail. Further, they
follow this trail with a very high degree of probability and simultaneously enhance the
trail by depositing their own pheromone. More and more ants follow the pheromone-rich
trail and the probability of the trail being followed by other ants is further enhanced by
the increased trail deposition. This is a positive feedback process which favors the path
along which more ants previously traversed. The ant algorithms are based on the
indirect communication capabilities of the ants. In ACO algorithms, virtual ants are
deputed to generate rules by using heuristic information or visibility and the principle of
indirect pheromone communication capabilities for iterative improvement of rules.
ACO was initially used to solve the traveling salesman problem (TSP). The aim of the
TSP is finding the shortest Hamiltonian cycle in a graph G = (N, E), where N denotes the
set of nodes and E is the set of edges. The general procedure of the ACO algorithm
manages the scheduling of three steps (Dorigo and Caro, 1999):
Step 1: Initialization. The initialization of the ACO includes two parts: the first
consists mainly of the initialization of the pheromone trail. Second, a number of ants are
placed on randomly chosen nodes. Then each of the distributed ants will
perform a tour on the graph by constructing a path according to the node transition
rule described next.
Step 2: Solution construction. In each iteration, each ant constructs a complete
solution to the problem according to a probabilistic state transition rule. The state
transition rule depends mainly on the state of the pheromone and the visibility of ants.
Visibility is an additional ability used to make this method more efficient. For the path
between nodes i and j, it is represented as η_ij and, in the TSP, is inversely related to the
distance between i and j. The node transition rule is probabilistic. For the kth ant on
node i, the selection of the next node j to follow is according to the node transition
probability:
    P_ij^k(t) = [τ_ij(t)]^α [η_ij]^β / Σ_{l ∈ N_i^k} [τ_il(t)]^α [η_il]^β,    ∀ j ∈ N_i^k          (4)
Figure 1. Ants find the shortest path around an obstacle
where τ_ij(t) is the intensity of pheromone laid on edge (i, j); N_i^k is the list of neighboring
nodes from node i available to ant k at time t; and α and β are control parameters.
Step 3: Pheromone updating rule. When every ant has constructed a solution, the
intensity of pheromone trails on each edge is updated by the pheromone updating rule
(global pheromone updating rule). The global pheromone updating rule is applied in
two phases. First, an evaporation phase where a fraction of the pheromone evaporates,
and then a reinforcement phase where the elitist ant, which has the best solution among
others, deposits an amount of pheromone:

    τ_ij(t + d) = (1 − ρ) · τ_ij(t) + ρ · Δτ_ij          (5)

where ρ (0 < ρ < 1) represents the persistence of pheromone trails ((1 − ρ) is the
evaporation rate); d is the number of variables or movements an ant must take to
complete a tour; and Δτ_ij is the amount of pheromone increase for the elitist ant,
equal to:

    Δτ_ij = 1 / L*          (6)

where L* is the length of the solution found by the elitist ant.
At the end of each movement, a local pheromone update reduces the level of the
pheromone trail on paths selected by the ant colony during the preceding iteration.
When an ant travels to node j from node i, the local update rule adjusts the intensity of
pheromone on the path connecting these two nodes by

    τ_ij(t + 1) = ξ · τ_ij(t)          (7)

where ξ is an adjustable parameter between 0 and 1, representing the persistence of the
pheromone.
This process is iterated until a stopping criterion is met.
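Steps 2 and 3 can be sketched compactly as follows. The parameter values are those reported in Section 3.3 of this paper, and XI stands for the local-update persistence factor (its printed symbol is not recoverable from this excerpt):

```python
import random

# Parameter values from Section 3.3; XI is an assumed name for the
# local-update persistence factor.
ALPHA, BETA, RHO, XI = 1.0, 0.4, 0.2, 0.1

def transition_probabilities(tau, eta, allowed):
    """Equation (4): probability of choosing each allowed path j."""
    weights = {j: (tau[j] ** ALPHA) * (eta[j] ** BETA) for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def choose_path(tau, eta, allowed):
    """Roulette-wheel selection according to the transition probabilities."""
    probs = transition_probabilities(tau, eta, allowed)
    r, acc = random.random(), 0.0
    for j, p in probs.items():
        acc += p
        if r <= acc:
            return j
    return j          # guard against floating-point round-off

def global_update(tau, elite_paths, best_cost):
    """Equations (5)-(6): evaporate, then reinforce the elitist ant's paths."""
    dtau = 1.0 / best_cost
    for j in tau:
        tau[j] *= (1.0 - RHO)
        if j in elite_paths:
            tau[j] += RHO * dtau

def local_update(tau, j):
    """Equation (7): decay the pheromone on a just-used path."""
    tau[j] *= XI
```

With uniform pheromone and visibility, every path is equally likely; the global rule then biases future choices toward the elitist ant's paths.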
3.2 ACO: engineering design problems
In order to use the ACO method for design of engineering problems, the method
explained in the previous section must be modified. Since ACO is a discrete
optimization method, discrete values for each design variable (x_i) should be defined,
and each of these discrete values is considered as a virtual path for
the ants. To fulfill this goal, the permitted range and accuracy for the variables
are determined. Figure 2 is an illustration of the permissible paths for the variables.
Unlike the TSP, which has only one path between two nodes, in constrained optimization
problems the number of virtual paths between two nodes equals the number of
allowable values. The target of the optimal design for an engineering problem is to find
the best path among all paths of this graph.
Figure 2. The virtual paths for engineering design problems
The length of each path equals a permissive amount of a variable:

    x_i,j = x_i,min + (j − 1) · x_i*,    i = 1, 2, ..., d;  j = 1, 2, ..., nm_i          (8)

where x_i* is the accuracy rate of the ith design variable; j is the number of the virtual path,
from 1 to nm_i; and nm_i is the maximum number of virtual paths for the ith variable,
selected considering the required accuracy for the solved problem.
The amount of visibility for each path is

    η_ij = 1 / x_i,j,    i = 1, 2, ..., d;  j = 1, 2, ..., nm_i          (9)
For each variable, a vector called the pheromone vector, developed to record the amount of
pheromone trails upon each path, is defined as T_i(t) = [τ_ij(t)]_{nm_i}. The initial amount of
pheromone for all paths can be written as

    τ_ij(0) = 1 / f_cost(x_min)          (10)

where f_cost(x_min) is obtained by setting the minimum values for the variables in the
cost function.
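Equations (8)-(10) amount to a simple per-variable discretization. A minimal sketch (function names are illustrative, not from the paper):

```python
def build_paths(x_min, x_max, accuracy):
    """Equation (8): allowed discrete values (virtual paths) for one variable.

    nm is the number of paths implied by the range and the accuracy rate.
    """
    nm = int(round((x_max - x_min) / accuracy)) + 1
    return [x_min + (j - 1) * accuracy for j in range(1, nm + 1)]

def visibility(values):
    """Equation (9): visibility is the reciprocal of each path length."""
    return [1.0 / v for v in values]

def initial_pheromone(f_cost, x_min_vector, nm):
    """Equation (10): uniform initial pheromone from the cost at the lower bounds."""
    tau0 = 1.0 / f_cost(x_min_vector)
    return [tau0] * nm
```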
The initial position of the ants at the start of each cycle is expressed by the scatter
vector. In this vector, the subscript i is a random integer between one and the
number of variables that shows the first location of each ant. Using Equation (4), the
transition vector [P_ij^k(t)]_{nm_i} determines the movements of the ants at time t, and each
ant selects a path for the first location (variable) and then moves to the next location. This
movement is done by considering the number of variables (Figure 2). It means that
when an ant is located at the ith variable, the next location is i + 1, and when it is at the
last variable, the next location is the first variable. This process is continued until all
ants select a value for each variable.
Equations (5) and (7) are utilized for pheromone updating in design problems. Since
the shortest Hamiltonian cycle in the TSP is analogous to the minimum cost function in
the engineering design problems, in Equation (6) the minimum amount of the cost
function in the kth iteration, min(f_cost^k), may be used instead of the shortest length of the
graph:

    Δτ_ij = 1 / min(f_cost^k)          (11)
3.3 ACO: parameter setting
ACO is parameterized by α, β, ρ, ξ, and the number of ants. Parameters α and β
are constants which control the relative contribution between the intensity of
pheromone laid on edge (i, j), reflecting the previous experiences of the ants about this
edge, and the value of visibility determined by a greedy heuristic for the original
problem. As β increases in value, the ants are more likely to choose the shorter paths. A
high level of visibility is a desirable property when solving a TSP; however, in
engineering design problems smaller values may produce infeasible solutions (Kaveh
et al., 2008a). In this study, α is set to 1.0 but β is set to 0.4, which is smaller than that
used in the original Ant System (Dorigo and Stutzle, 2004).
The parameter ρ determines pheromone evaporation in the global updating rule, and ξ
represents the persistence of the pheromone trail in the local updating rule. Both
parameters have an influence on the exploratory behavior of the ants. In order to
emphasize exploration, one may decrease ρ or ξ (so that relative differences
between pheromone trails increase slowly). When emphasizing exploration of the
search space in this way, success rates are improved; however, running times increase
as a counterpart. In this study, the values of ρ and ξ are set small
(ρ = 0.2 and ξ = 0.1) for reaching a good solution (Nourani et al., 2009).
In general, the best value for the number of ants is a function of the class of
problems being attacked, and most of the time it must be set experimentally (Dorigo
and Stutzle, 2004). For engineering design problems, the number of ants can be set to
20, because with smaller values the success rates decrease, and with greater values
the number of function evaluations as well as the running times increase while the
value of the standard deviation does not decrease significantly (Kaveh et al., 2008a).
4. Improved ACO
In order to improve the ACO algorithm, the SOM is introduced. SOM is based on the
principles of the finite element method, which is one of the major numerical solution
techniques. This method has been developed and applied to solve numerous engineering
problems in order to find their approximate solutions. The finite element method requires
division of the problem domain (Figure 3(a)) into many subdomains, and each subdomain
is called a finite element. These element patches are considered instead of the main domain,
Figure 3(b). As the number of finite elements increases, the approximate solutions
obtained become nearer to the exact solutions; conversely, if a small number of finite
elements is used, then the amount of calculation as well as the accuracy of the solutions
decreases. In the finite element method, for further investigation, some special patches can be
divided into smaller sections, Figure 3(c). Similarly, SOM divides the search space into
subdomains and performs the optimization process on these patches; then, based on
the resulting solutions, the undesirable parts are deleted, and the remaining space is
divided into smaller parts for further investigation in the next stage. This process
continues until the remaining space becomes smaller than the required accuracy.
SOM can be considered as the repetition of the following steps for a definite number of
times, nc (at stage k of the repetition):
Step 1: Calculate permissible bounds for each variable. If x_i^(k−1) is the solution
obtained from the previous stage (k − 1) for the ith variable, then
Figure 3. The finite element method
    If x_i^(k−1) < (1 − c1) · x_i,min^(k−1) + c1 · x_i,max^(k−1):
        x_i,min^(k) = x_i,min^(k−1)
        x_i,max^(k) = x_i,min^(k−1) + 2c1 · (x_i,max^(k−1) − x_i,min^(k−1))

    If x_i^(k−1) > c1 · x_i,min^(k−1) + (1 − c1) · x_i,max^(k−1):
        x_i,min^(k) = x_i,max^(k−1) − 2c1 · (x_i,max^(k−1) − x_i,min^(k−1))
        x_i,max^(k) = x_i,max^(k−1)

    Else:
        x_i,min^(k) = x_i^(k−1) − c1 · (x_i,max^(k−1) − x_i,min^(k−1))
        x_i,max^(k) = x_i^(k−1) + c1 · (x_i,max^(k−1) − x_i,min^(k−1))          (12)

where i = 1, 2, ..., d; k = 2, ..., nc; c1 is an adjustable factor which determines the
amount of the remaining search space; nc is the maximum number of repetitious stages
of the SOM; x_i,min^(k) and x_i,max^(k) are the minimum and the maximum allowable values for the
ith variable at stage k, respectively. In stage 1, the amounts of x_i,min^(1) and x_i,max^(1) are set to

    x_i,min^(1) = x_i,min,    x_i,max^(1) = x_i,max,    i = 1, 2, ..., d          (13)
Step 2: Determine the accuracy value for the variables. In each stage, the number of
permissible values for each variable is considered to be c2, and therefore the accuracy
rate of each variable equals

    x_i*^(k) = (x_i,max^(k) − x_i,min^(k)) / (c2 − 1),    i = 1, 2, ..., d          (14)

where x_i*^(k) is the amount of increase in the ith variable; c2 is the number of
subdomains considered instead of nm_i, and it has a smaller value than nm_i in the SOM.
Step 3: Create the series of allowable values for the variables. The set of
allowable values for variable i can be defined by using Equations (8) and (14) as

    { x_i,min^(k), x_i,min^(k) + x_i*^(k), ..., x_i,min^(k) + (c2 − 1) · x_i*^(k) = x_i,max^(k) },    i = 1, 2, ..., d          (15)

Step 4: Determine the optimum solution of the current stage. The last step is
performing an optimization process using the ACO algorithm, with Equation (15)
considered as the permissive values for the variables.
SOM ends when the accuracy rate of the last stage (i.e. x_i*^(nc)) is less than
the accuracy rate of the primary problem (i.e. x_i*):

    x_i*^(nc) ≤ x_i*,    i = 1, 2, ..., d          (16)

Another terminating criterion can be described as: SOM ends when the remaining
space at stage nc (i.e. x_i,max^(nc) − x_i,min^(nc)) is less than a search space domain with c2
subdomains and x_i* accuracy rate:

    x_i,max^(nc) − x_i,min^(nc) ≤ (c2 − 1) · x_i*,    i = 1, 2, ..., d          (17)
In fact, Equations (16) and (17) express the same thing, and it is possible to reach one by
using the other. If Equations (14) and (12) are utilized in Equation (17), nc can be
obtained as follows:

    (x_i,max^(nc) − x_i,min^(nc)) ≤ (c2 − 1) · (x_i,max − x_i,min) / (nm_i − 1)
    (x_i,max − x_i,min) · (2c1)^(nc−1) ≤ (c2 − 1) · (x_i,max − x_i,min) / (nm_i − 1)
    (2c1)^(nc−1) / (c2 − 1) ≤ 1 / (nm_i − 1)          (18)

If nm_max is the maximum value of all nm_i, then the terminating criterion can be
obtained from the following relationship:

    (2c1)^(nc−1) / (c2 − 1) ≤ 1 / (nm_max − 1)          (19)
SOM improves the search process by updating the search space from one stage to the
next. By applying this mechanism, the sizes of the pheromone vectors and decision
vectors decrease from nm_i to c2. The search space reduces from Π_{i=1}^{d} nm_i to
c2^d · nc. As an example, if nm_i = 10^4, d = 4, c2 = 30, and nc = 16, then
Π_{i=1}^{d} nm_i = 10^16 while c2^d · nc = 1.3 × 10^7. This mechanism is named
sub-optimization because in each stage a complete optimization is carried out and a
suboptimal solution is obtained to be used in the next stages.
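The stage update of Equation (12) and the termination bound of Equation (19) can be sketched as follows. The value c1 = 1/3 used below is an assumed illustrative choice (2·c1 must be below 1 for the space to shrink); the paper does not state c1 in this excerpt:

```python
def update_bounds(x_prev, lo, hi, c1):
    """Equation (12): shrink one variable's bounds around the stage-(k-1) solution.

    The new interval always has width 2*c1*(hi - lo) and stays inside [lo, hi].
    """
    width = hi - lo
    if x_prev < (1 - c1) * lo + c1 * hi:              # solution near the lower bound
        return lo, lo + 2 * c1 * width
    if x_prev > c1 * lo + (1 - c1) * hi:              # solution near the upper bound
        return hi - 2 * c1 * width, hi
    return x_prev - c1 * width, x_prev + c1 * width   # solution in the middle

def stages_needed(c1, c2, nm_max):
    """Smallest nc satisfying Equation (19): (2*c1)**(nc-1)/(c2-1) <= 1/(nm_max-1)."""
    nc = 1
    while (2 * c1) ** (nc - 1) / (c2 - 1) > 1.0 / (nm_max - 1):
        nc += 1
    return nc
```

With c1 = 1/3, c2 = 30 and nm_max = 10^4, stages_needed returns 16, matching the nc = 16 of the paper's search-space example (30^4 · 16 ≈ 1.3 × 10^7 versus 10^16).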
5. Simulation and analysis
Several well-studied engineering design problems taken from the optimization
literature are used to show the way in which the proposed approach works. These
examples have been previously solved using a variety of other techniques, which is
useful to show the validity and effectiveness of the proposed algorithm. For each
example, 30 independent runs are carried out using the IACO and the results are
compared to those of the other algorithms.
5.1 Simulation results for trapezoidal channels design problem
The trapezoidal channels with easily constructed geometry shapes are most often built
in practice. This kind of channel section was previously used for optimal channel design
by other researchers, such as Das (2000), using the Lagrangian multipliers method;
Jain et al. (2004), utilizing a genetic algorithm; and Nourani et al. (2009), applying the
ACO. Figure 4 shows the geometry of a trapezoidal channel cross section with different
Manning roughness coefficient values n_1, n_2, and n_3 at the two sides and the bed of the
channel. The slopes of the side faces having Manning roughness coefficient values n_1
and n_2 are z_1 and z_2, respectively. The design variables are: the bed width b (= x_1),
the flow depth h (= x_2), z_1 (= x_3), and z_2 (= x_4). T_f, T_w, and f are the top width
of the channel cross section, the top width of flow, and the freeboard, respectively.
The cost function (f_cost), the total construction cost per unit length of channel,
includes the costs of excavating the cross-sectional area and of lining the side slopes
and bed. This function can be written as follows:

    f_cost(x) = c1 · A_t + c2 · P_1 + c3 · P_2 + c4 · P_3          (20)

where A_t is the total area and P_1, P_2, and P_3 are the perimeters of the side slopes and
bed of the channel cross section, including freeboard. c1 is the total cross-sectional area
cost per unit area, and c2, c3, and c4 are the lining costs per unit length of the perimeters
of the two sides having slopes z_1 and z_2 and of the perimeter of the bed, respectively.
Manning's equation for uniform flow, as expressed by Das (2000), is the
equality constraint (Model I):

    Q · n_e / √S_0 − A_w^(5/3) / P_w^(2/3) = 0          (21)

where Q is the design discharge; S_0 is the channel's longitudinal bed slope; A_w is the
wetted flow area; P_w is the wetted perimeter; and n_e is the equivalent roughness, which
can be expressed as follows (Jain et al., 2004):

    n_e = [ ( ((1 + z_1^2)^(1/2) · n_1^(3/2) + (1 + z_2^2)^(1/2) · n_2^(3/2)) · h + b · n_3^(3/2) ) / P_w ]^(2/3)          (22)
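As a numerical check, Equations (21)-(22) can be evaluated for a candidate design. The wetted-geometry formulas for A_w and P_w are not spelled out in this excerpt, so the sketch below assumes the standard trapezoid expressions:

```python
import math

def wetted_geometry(b, h, z1, z2):
    """Standard trapezoid formulas (assumed; the paper does not spell them out):
    wetted area and wetted perimeter for bed width b, flow depth h, side slopes z1, z2."""
    area = h * (b + 0.5 * (z1 + z2) * h)
    perimeter = b + h * math.sqrt(1 + z1 ** 2) + h * math.sqrt(1 + z2 ** 2)
    return area, perimeter

def equivalent_roughness(b, h, z1, z2, n1, n2, n3):
    """Equation (22): equivalent Manning roughness of the composite section."""
    _, pw = wetted_geometry(b, h, z1, z2)
    num = (math.sqrt(1 + z1 ** 2) * n1 ** 1.5
           + math.sqrt(1 + z2 ** 2) * n2 ** 1.5) * h + b * n3 ** 1.5
    return (num / pw) ** (2.0 / 3.0)

def manning_residual(b, h, z1, z2, n1, n2, n3, Q, S0):
    """Equation (21): Model I equality constraint; zero when uniform flow holds."""
    aw, pw = wetted_geometry(b, h, z1, z2)
    ne = equivalent_roughness(b, h, z1, z2, n1, n2, n3)
    return Q * ne / math.sqrt(S0) - aw ** (5.0 / 3.0) / pw ** (2.0 / 3.0)
```

A useful sanity check: when n_1 = n_2 = n_3 = n, Equation (22) collapses to n_e = n, since the numerator becomes n^(3/2) · P_w.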
In the second model (Model II), as Jain et al. (2004) assumed, the cross section of a
trapezoidal channel may be divided into three segments: two triangular and one
rectangular (Figure 5).
Each part has a different mean velocity, and the sum of the segmental discharges equals
the total discharge. Therefore, the constraint for this model can be expressed as
Figure 4. A trapezoidal channel geometry (Model I)
Figure 5. A trapezoidal channel cross section divided into three segments (Model II)
follows (Jain et al., 2004):

    Q / √S_0 − A_w1^(5/3) / [ n_1 · (√(1 + z_1^2) · h)^(2/3) ] − A_w2^(5/3) / [ n_2 · (√(1 + z_2^2) · h)^(2/3) ] − A_w3^(5/3) / [ n_3 · b^(2/3) ] = 0          (23)

where A_w1 and A_w2 are the wetted areas of the triangular segments and A_w3 is the
wetted area of the rectangular segment.
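Model II can be checked the same way. The segment areas below are assumed from the standard decomposition (z_i · h^2 / 2 for each triangle, b · h for the rectangle), which is consistent with the total trapezoidal wetted area:

```python
import math

def model2_residual(b, h, z1, z2, n1, n2, n3, Q, S0):
    """Equation (23): Model II equality constraint built from segmental discharges."""
    a1 = 0.5 * z1 * h ** 2      # triangular segment under side slope z1 (assumed)
    a2 = 0.5 * z2 * h ** 2      # triangular segment under side slope z2 (assumed)
    a3 = b * h                  # central rectangular segment (assumed)
    q1 = a1 ** (5.0 / 3.0) / (n1 * (math.sqrt(1 + z1 ** 2) * h) ** (2.0 / 3.0))
    q2 = a2 ** (5.0 / 3.0) / (n2 * (math.sqrt(1 + z2 ** 2) * h) ** (2.0 / 3.0))
    q3 = a3 ** (5.0 / 3.0) / (n3 * b ** (2.0 / 3.0))
    return Q / math.sqrt(S0) - q1 - q2 - q3
```

The three segment areas sum to h · (b + (z_1 + z_2) · h / 2), the full wetted area of the trapezoid, so the decomposition loses no flow area.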
The constant parameters in this study have been adopted as used by Das (2000),
Jain et al. (2004), and Nourani et al. (2009). Table I summarizes the amounts of the
parameters.
Table II illustrates the best solution vectors and the corresponding cost function for
the two considered cases.
5.2 Simulation results for a tension/compression spring design problem
This problem is described by Belegundu (1982) and Arora (1989). It consists of
minimizing the weight of a tension/compression spring subject to constraints on shear
stress, surge frequency, and minimum deflection, as shown in Figure 6.
The design variables are the mean coil diameter D (= x_2), the wire diameter d (= x_1),
and the number of active coils N (= x_3). The problem can be stated as
Cost function:

    f_cost(x) = (x_3 + 2) · x_2 · x_1^2          (24)
Table I. Modeling parameters of the trapezoidal channel

    Flow factors: Q = 100 m^3/s, f = 0.5 m, S_0 = 0.0016
    Manning coefficients: n_1 = 0.018, n_2 = 0.020, n_3 = 0.015
    Cost function parameters: c_1 = 0.60, c_2 = 0.25, c_3 = 0.20, c_4 = 0.30
Table II. Optimum results for trapezoidal channels design

    Methods                 x_1 (b)   x_2 (h)   x_3 (z_1)   x_4 (z_2)   f_cost    Model
    Das (2000)              5.826     4.052     0.247       0.265       22.958    I
    Jain et al. (2004)      5.433     4.211     0.272       0.296       22.973    I
    Nourani et al. (2009)   5.845     4.045     0.244       0.265       22.961    I
    Present work            5.874     4.047     0.237       0.256       22.958    I
    Jain et al. (2004)      3.539     4.037     0.244       0.131       15.089    II
    Nourani et al. (2009)   3.690     3.959     0.116       0.198       14.888    II
    Present work            3.789     3.919     0.1272      0.122       14.646    II
Figure 6. Tension/compression spring
Constraint functions:

$$g_1(x) = 1 - \frac{x_2^3 x_3}{71{,}785\,x_1^4} \le 0$$
$$g_2(x) = \frac{4x_2^2 - x_1 x_2}{12{,}566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5{,}108\,x_1^2} - 1 \le 0$$
$$g_3(x) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0$$
$$g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0 \qquad (25)$$
Variable regions:

$$0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15$$
$$x_i^{*} = 1\mathrm{E}{-4}, \quad i = 1, 2, 3 \qquad (26)$$
This problem has been solved by Belegundu (1982) using eight different mathematical
optimization techniques (only the best results are shown). Arora (1989) also solved this
problem using a numerical optimization technique called constraint correction at
constant cost. Coello (2000) as well as Coello and Montes (2002) solved this problem using
GA-based methods. Additionally, Qie and Wang (2007) utilized a co-evolutionary particle
swarm optimization (CPSO). Recently, Montes and Coello (2008) used various evolution
strategies to solve this problem. Table III presents the best solution of this problem
obtained using the IACO algorithm and compares the IACO results with solutions
reported by other researchers. Table IV shows the statistical simulation results.
From Table III, it can be seen that the best feasible solution obtained by IACO is better
than those previously reported. In addition, as shown in Table IV, the average searching
quality of IACO is superior to those of the other methods. Moreover, the standard deviation
of the results obtained by IACO in 30 independent runs for this problem is the smallest.
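The best design in Table III can be verified against Equations (24) and (25) with a few lines of code. This is an illustrative sketch, not the authors' implementation; small constraint residuals of order 1e-3 remain because the reported digits are rounded:

```python
def spring_cost(x):
    """Equation (24): spring weight, with x = (d, D, N)."""
    x1, x2, x3 = x
    return (x3 + 2) * x2 * x1**2

def spring_constraints(x):
    """Equation (25): the four g_i(x) <= 0 constraints."""
    x1, x2, x3 = x
    return [
        1 - x2**3 * x3 / (71785 * x1**4),
        (4 * x2**2 - x1 * x2) / (12566 * (x2 * x1**3 - x1**4))
        + 1 / (5108 * x1**2) - 1,
        1 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1,
    ]

best = (0.051865, 0.361500, 11.000000)  # "Present work" row of Table III
```

Evaluating `spring_cost(best)` reproduces the reported cost of about 0.01264, and all four constraints are satisfied to within the rounding of the published digits.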
5.3 Simulation results for welded beam design problem
The welded beam structure, shown in Figure 7, is a practical design problem that has
often been used as a benchmark for testing different optimization methods (He
Table III. Optimum results for the tension/compression spring design

                          Optimal design variables
Methods                   x1 (d)     x2 (D)     x3 (N)       f_cost
Belegundu (1982)          0.050000   0.315900   14.250000    0.0128334
Arora (1989)              0.053396   0.399180    9.185400    0.0127303
Coello (2000)             0.051480   0.351661   11.632201    0.0127048
Coello & Montes (2002)    0.051989   0.363965   10.890522    0.0126810
Qie & Wang (2007)         0.051728   0.357644   11.244543    0.0126747
Montes & Coello (2008)    0.051643   0.355360   11.397926    0.012698
Present work              0.051865   0.361500   11.000000    0.0126432
and Wang, 2007; Coello, 2000; Coello and Montes, 2002; Montes and Coello, 2008;
Ragsdell and Phillips, 1976; Deb, 1991). The objective is to find the minimum
fabricating cost of the welded beam subject to constraints on shear stress (τ), bending
stress (σ), buckling load (P_c), end deflection (δ), and side constraints. There are four
design variables, namely h (= x1), l (= x2), t (= x3), and b (= x4).
The mathematical formulation of the cost function f_cost({x}), which is the total fabricating
cost mainly comprised of the set-up, welding labor, and material costs, is as follows:
Cost function:

$$f_{cost}(x) = 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4\,(14.0 + x_2) \qquad (27)$$
Constraint functions:

$$g_1(x) = \tau(x) - \tau_{max} \le 0$$
$$g_2(x) = \sigma(x) - \sigma_{max} \le 0$$
$$g_3(x) = x_1 - x_4 \le 0$$
$$g_4(x) = 0.10471\,x_1^2 + 0.04811\,x_3 x_4\,(14.0 + x_2) - 5.0 \le 0$$
$$g_5(x) = 0.125 - x_1 \le 0$$
$$g_6(x) = \delta(x) - \delta_{max} \le 0$$
$$g_7(x) = P - P_c(x) \le 0 \qquad (28)$$
Figure 7. Welded beam structure
Table IV. Statistical results of different methods for the tension/compression spring design
Methods Best Mean Worst Std Dev
Belegundu (1982) 0.0128334 N/A N/A N/A
Arora (1989) 0.0127303 N/A N/A N/A
Coello (2000) 0.0127048 0.012769 0.012822 3.9390e-5
Coello & Montes (2002) 0.0126810 0.0127420 0.012973 5.9000e-5
Qie & Wang (2007) 0.0126747 0.012730 0.012924 5.1985e-5
Montes & Coello (2008) 0.012698 0.013461 0.16485 9.6600e-4
Present work 0.0126432 0.012720 0.012884 3.4888e-5
where

$$\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\,\frac{x_2}{2R} + (\tau'')^2}$$
$$\tau' = \frac{P}{\sqrt{2}\,x_1 x_2}, \qquad \tau'' = \frac{MR}{J}$$
$$M = P\left(L + \frac{x_2}{2}\right), \qquad R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}$$
$$J = 2\left\{\sqrt{2}\,x_1 x_2\left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}$$
$$\sigma(x) = \frac{6PL}{x_4 x_3^2}, \qquad \delta(x) = \frac{4PL^3}{E x_3^3 x_4}$$
$$P_c(x) = \frac{4.013E\sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$$
$$L = 14\ \mathrm{in.}, \quad P = 6{,}000\ \mathrm{lb}, \quad E = 30\times 10^6\ \mathrm{psi}, \quad G = 12\times 10^6\ \mathrm{psi}$$
Variable regions:

$$0.1 \le x_1 \le 2, \quad 0.1 \le x_2 \le 10, \quad 0.1 \le x_3 \le 10, \quad 0.1 \le x_4 \le 2$$
$$x_i^{*} = 1\mathrm{E}{-4}, \quad i = 1, 2, 3, 4 \qquad (29)$$
Deb (1991), Coello (2000), and Coello and Montes (2002) solved this problem using GA-
based methods. Ragsdell and Phillips (1976) compared the optimal results of different
methods that were mainly based on mathematical optimization algorithms: APPROX
(Griffith and Stewart's successive linear approximation), DAVID (Davidon-Fletcher-
Powell with a penalty function), SIMPLEX (the Simplex method with a penalty function),
and RANDOM (Richardson's random method). Also, Qie and Wang (2007) using CPSO,
and Montes and Coello (2008) using evolution strategies, solved this problem. The
comparison of results is shown in Table V. The IACO result, which was obtained after
approximately 17,600 searches, is better than that reported by Qie and Wang (2007),
whose result was the best among the other methods.
The statistical simulation results are summarized in Table VI. From Table VI,
it can be seen that the worst solution found by IACO is better than the best solution
found by Ragsdell and Phillips (1976) and the best solution found by Deb (1991).
In addition, the standard deviation of the results obtained by IACO in 30 independent
runs is very small.
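The reported optimum can likewise be checked against Equations (27) and (28). The sketch below assumes the limits τ_max = 13,600 psi, σ_max = 30,000 psi and δ_max = 0.25 in. that are standard in this benchmark literature; the excerpt itself does not restate them:

```python
import math

# Fixed data of the welded beam problem
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25  # assumed standard limits

def welded_beam(x):
    """Return (cost, constraints) per Equations (27)-(28)."""
    x1, x2, x3, x4 = x
    cost = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    tp = P / (math.sqrt(2) * x1 * x2)                        # tau'
    M = P * (L + x2 / 2)
    R = math.sqrt(x2**2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 * (math.sqrt(2) * x1 * x2 * (x2**2 / 12 + ((x1 + x3) / 2) ** 2))
    tpp = M * R / J                                          # tau''
    tau = math.sqrt(tp**2 + 2 * tp * tpp * x2 / (2 * R) + tpp**2)
    sigma = 6 * P * L / (x4 * x3**2)
    delta = 4 * P * L**3 / (E * x3**3 * x4)
    pc = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36) / L**2
          * (1 - x3 / (2 * L) * math.sqrt(E / (4 * G))))
    g = [tau - TAU_MAX, sigma - SIGMA_MAX, x1 - x4,
         0.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
         0.125 - x1, delta - DELTA_MAX, P - pc]
    return cost, g

cost, g = welded_beam((0.205700, 3.471131, 9.036683, 0.205731))  # Table V best
```

At the reported design the shear-stress and buckling constraints are active, i.e. their values sit essentially at zero, which is typical of the optimum of this problem.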
5.4 Simulation results for a pressure vessel design problem
A cylindrical vessel is capped at both ends by hemispherical heads as shown in
Figure 8. The objective is to minimize the total cost, including the cost of material,
forming, and welding (Kannan and Kramer, 1994):
$$f_{cost}(x) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3 \qquad (30)$$
where x1 is the thickness of the shell (T_s), x2 is the thickness of the head (T_h), x3 is the
inner radius (R), and x4 is the length of the cylindrical section of the vessel, not including
the head (L). T_s and T_h are integer multiples of 0.0625 in., the available thickness of
rolled steel plates, and R and L are continuous.
Table V. Optimum results for the welded beam design

                               Optimal design variables
Methods                        x1 (h)     x2 (l)     x3 (t)     x4 (b)     f_cost
Ragsdell & Phillips (1976)
  APPROX                       0.2444     6.2189     8.2915     0.2444     2.3815
  DAVID                        0.2434     6.2552     8.2915     0.2444     2.3841
  SIMPLEX                      0.2792     5.6256     7.7512     0.2796     2.5307
  RANDOM                       0.4575     4.7313     5.0853     0.6600     4.1185
Deb (1991)                     0.248900   6.173000   8.178900   0.253300   2.433116
Coello (2000)                  0.208800   3.420500   8.997500   0.210000   1.748309
Coello & Montes (2002)         0.205986   3.471328   9.020224   0.206480   1.728226
Qie & Wang (2007)              0.202369   3.544214   9.048210   0.205723   1.728024
Montes & Coello (2008)         0.199742   3.612060   9.037500   0.206082   1.737300
Present work                   0.205700   3.471131   9.036683   0.205731   1.724918
Table VI. Statistical results of different methods for the welded beam design

Methods                        Best       Mean       Worst      Std Dev
Ragsdell & Phillips (1976)     2.3815     N/A        N/A        N/A
Deb (1991)                     2.433116   N/A        N/A        N/A
Coello (2000)                  1.748309   1.771973   1.785835   0.011220
Coello & Montes (2002)         1.728226   1.792654   1.993408   0.074713
Qie & Wang (2007)              1.728024   1.748831   1.782143   0.012926
Montes & Coello (2008)         1.737300   1.813290   1.994651   0.070500
Present work                   1.724918   1.729752   1.775961   0.009200
Figure 8. Schematic of pressure vessel
The constraint functions can be stated as follows:

$$g_1(x) = -x_1 + 0.0193\,x_3 \le 0$$
$$g_2(x) = -x_2 + 0.00954\,x_3 \le 0$$
$$g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0$$
$$g_4(x) = x_4 - 240 \le 0 \qquad (31)$$
Variable regions:

$$0 \le x_1 \le 99, \quad 0 \le x_2 \le 99, \quad 10 \le x_3 \le 200, \quad 10 \le x_4 \le 200$$
$$x_i^{*} = 0.0625, \quad i = 1, 2; \qquad x_j^{*} = 1\mathrm{e}{-4}, \quad j = 3, 4 \qquad (32)$$
The approaches applied to this problem include genetic adaptive search (Deb and
Gene, 1997), an augmented Lagrangian multiplier approach (Kannan and Kramer,
1994), a branch and bound technique (Sandgren, 1988), a GA-based co-evolution model
(Coello, 2000), a feasibility-based tournament selection scheme (Coello and Montes,
2002), a co-evolutionary particle swarm optimization (Qie and Wang, 2007), and an
evolution strategy (Montes and Coello, 2008). The best solutions obtained by the above
mentioned approaches are listed in Table VII, and their statistical simulation results
are shown in Table VIII.
From Table VII, it can be seen that the best solution found by IACO is better than
the best solutions found by other techniques. From Table VIII it can be seen that the
average searching quality of IACO is better than those of other methods, and even the
worst solution found by IACO is better than the best solutions found by Kannan and
Kramer (1994), Sandgren (1988), Coello (2000), and Deb and Gene (1997).
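A minimal check of the best design in Table VII against Equations (30) and (31); again an illustrative sketch rather than the authors' code, with the few tenths of tolerance reflecting the rounding of the published digits:

```python
import math

def vessel_cost(x):
    """Equation (30): total cost of the pressure vessel."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def vessel_constraints(x):
    """Equation (31): the four g_i(x) <= 0 constraints."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3**2 * x4 - 4.0 / 3.0 * math.pi * x3**3 + 1296000.0,
        x4 - 240.0,
    ]

best = (0.812500, 0.437500, 42.098353, 176.637751)  # "Present work", Table VII
```

Note that g1 is essentially active at the optimum: 0.0193 × 42.098353 ≈ 0.8125, i.e. the shell thickness sits at the smallest allowable multiple of 0.0625 in.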
Table VII. Optimum results for the pressure vessel

                            Optimal design variables
Methods                     x1 (T_s)   x2 (T_h)   x3 (R)      x4 (L)       f_cost
Sandgren (1988)             1.125000   0.625000   47.700000   117.701000   8,129.1036
Kannan & Kramer (1994)      1.125000   0.625000   58.291000    43.690000   7,198.0428
Deb (1997)                  0.937500   0.500000   48.329000   112.679000   6,410.3811
Coello (2000)               0.812500   0.437500   40.323900   200.000000   6,288.7445
Coello & Montes (2002)      0.812500   0.437500   42.097398   176.654050   6,059.9463
Qie & Wang (2007)           0.812500   0.437500   42.091266   176.746500   6,061.0777
Montes & Coello (2008)      0.812500   0.437500   42.098087   176.640518   6,059.7456
Present work                0.812500   0.437500   42.098353   176.637751   6,059.7258
5.5 Simulation results for a ten-bar truss design problem
The ten-bar truss problem has become a common problem in the field of structural
design for testing and verifying the efficiency of many different optimization methods,
and was previously analyzed by Camp et al. (1998), Camp and Bichon (2004), Farshi and
Schmit (1974), Schmit and Miura (1976), Venkayya (1971), Gellatly and Berke (1971),
Dobbs and Nelson (1976), Rizzi (1976), Khan et al. (1979), Rajeev and Krishnamoorthy
(1992), Li et al. (2007), and Kaveh and Talatahari (2008), among many others. Figure 9
shows the geometry, support conditions, and loading condition for this cantilevered
truss. The material density is 0.1 lb/in.³ (2,767.990 kg/m³) and the modulus of elasticity
is 10,000 ksi (68,950 MPa). The members are subjected to stress limits of ±25 ksi
(172.375 MPa) and all nodes, in both the vertical and horizontal directions, are subjected
to displacement limits of ±2.0 in. (5.08 cm). There are 10 design variables in this example
and a set of pseudo-discrete variables ranging from 0.1 to 35.0 in.² (from 0.6452 to
225.806 cm²).
Figure 10 compares the best and the worst convergence histories of IACO in 30 runs.
The standard deviation of IACO is 6.76 lb, while the standard deviation of the primary
ACO has been reported as 27.22 lb (Camp and Bichon, 2004). Table IX compares the
results obtained in this research with the outcomes of other studies.
Table VIII. Statistical results of different methods for the pressure vessel

Methods                        Best         Mean         Worst        Std Dev
Sandgren (1988) 8,129.1036 N/A N/A N/A
Kannan & Kramer (1994) 7,198.0428 N/A N/A N/A
Deb (1997) 6,410.3811 N/A N/A N/A
Coello (2000) 6,288.7445 6,293.8432 6,308.1497 7.4133
Coello & Montes (2002) 6,059.9463 6,177.2533 6,469.3220 130.9297
Qie & Wang (2007) 6,061.0777 6,147.1332 6,363.8041 86.4545
Montes & Coello (2008) 6,059.7456 6,850.0049 7,332.8798 426.0000
Present work 6,059.7258 6,081.7812 6,150.1289 67.2418
Figure 9. A ten-bar truss
6. Discussion
6.1 Reliability of IACO
When the SOM is used, some parts of the search space are deleted at each stage, and it
is therefore possible that the domain containing the optimum solution (the favorite
domain) is eliminated. If the solution vectors of all ants are outside the favorite domain,
this subdomain is eliminated; conversely, if even one ant enters the optimum domain,
the favorite domain is retained and the search process continues in the next stage. The
following calculations determine the probability that an ant falls in the optimum
domain. In stage k − 1, the remaining space is $(x_{i,max}^{k-1} - x_{i,min}^{k-1})^d$, and the favorite
domain is assumed to have the size of the next-stage subdomain. In that stage, the
probability that an ant falls within the favorite space during one function evaluation,
considering a random search as the search tool, is equal to
$$P = \left(\frac{x_{i,max}^{k} - x_{i,min}^{k}}{x_{i,max}^{k-1} - x_{i,min}^{k-1}}\right)^{d} = (2c_1)^d \qquad (33)$$
Therefore, the probability that an ant falls within the favorite space during one
function evaluation using IACO as the search tool is equal to

$$P_{ACO} = \lambda\left(\frac{x_{i,max}^{k} - x_{i,min}^{k}}{x_{i,max}^{k-1} - x_{i,min}^{k-1}}\right)^{d} = \lambda\,(2c_1)^d \qquad (34)$$
where λ ≥ 1 may be chosen considering the capacities of the search tool.
$1 - \lambda(2c_1)^d$ is the probability that an ant falls out of the favorite space during one
function evaluation in the kth stage. If nf is the number of function evaluations during
one stage of the SOM process, the probability that every ant falls out of the favorite
space in one stage is equal to

$$\left(1 - \lambda(2c_1)^d\right)^{nf} \qquad (35)$$
Figure 10. Comparison of the best and the worst convergence for the ten-bar truss
Table IX. Optimum results for the 10-bar planar truss

Optimal design variables (in.²):

Element   Camp et al.   Schmit &         Schmit & Miura (1976)   Venkayya   Gellatly &      Dobbs &          Rizzi
group     (1998)        Farshi (1974)    NEWSUMT    CONMIN       (1971)     Berke (1971)    Nelson (1976)    (1976)
A1        28.92         33.43            30.67      30.57        30.42      31.35           30.50            30.73
A2         0.10          0.100            0.100      0.369        0.128      0.100           0.100            0.100
A3        24.07         24.26            23.76      23.97        23.41      20.03           23.29            23.93
A4        13.96         14.26            14.59      14.73        14.91      15.60           15.43            14.73
A5         0.10          0.100            0.100      0.100        0.101      0.140           0.100            0.100
A6         0.56          0.100            0.100      0.364        0.101      0.240           0.210            0.100
A7         7.69          8.388            8.578      8.547        8.696      8.350           7.649            8.542
A8        21.95         20.74            21.07      21.11        21.08      22.21           20.98            20.95
A9        22.09         19.69            20.96      20.77        21.08      22.06           21.82            21.84
A10        0.10          0.100            0.100      0.320        0.186      0.100           0.100            0.100
Weight
(lb)      5,076.31      5,089.0          5,076.9    5,107.3      5,084.9    5,112.0         5,080.0          5,076.66

Element   Khan & Willmert (1979)   Rajeev &           Li et al. (2007)       Kaveh &             Present work
group     OPTDYN      CONMIN       Krishnamoorthy     PSOPC       HPSO       Talatahari (2008)   in.²        cm²
                                   (1992)
A1        30.98       25.70        25.20              30.569      30.704     30.068              30.493      196.728
A2         0.100       0.10         1.89               0.100       0.100      0.100               0.100        0.645
A3        24.17       25.11        24.87              22.974      23.167     23.207              23.230      149.871
A4        14.81       19.39        15.83              15.148      15.183     15.168              15.346       99.006
A5         0.100       0.10         0.10               0.100       0.100      0.100               0.100        0.645
A6         0.406       0.10         1.75               0.547       0.551      0.536               0.538        3.471
A7         7.547      15.40        16.76               7.493       7.460      7.462               7.451       48.068
A8        21.05       20.32        19.73              21.159      20.978     21.228              20.990      135.419
A9        20.94       20.74        20.98              21.556      21.508     21.630              21.458      138.438
A10        0.100       1.14         2.51               0.100       0.100      0.100               0.100        0.645
Weight
(lb)      5,066.98    5,470.51     5,560.72           5,061.00    5,060.92   5,057.36            5,058.43    22,500 N
As a result, the probability that at least one ant falls within the favorite domain during
one stage can be obtained as:

$$P_{ACO} = 1 - \left(1 - \lambda(2c_1)^d\right)^{nf} \qquad (36)$$
Finally, the probability that at least one ant falls within the favorite subdomain in the
last stage of SOM is as follows:

$$P_{ACO} = \left(1 - \left(1 - \lambda(2c_1)^d\right)^{nf}\right)^{nc-1} \qquad (37)$$
In the above equation, all three SOM parameters, namely c₁, c₂, and nc, are significant.
The effect of c₁ and nc on P_ACO is directly observable; however, c₂ affects P_ACO
through nf. The values of nf for the welded beam problem corresponding to various c₂
are shown in Figure 11.
As can be seen from Figure 11, there is an approximately linear relationship between
nf and c₂:

$$nf \approx w_1 c_2 + w_2 \qquad (38)$$
where w₁ and w₂ have constant values. The values of w₁ and w₂ for the welded beam
example are 28.982 and 94.651, respectively. Substituting Equation (38) into Equation
(37), the probability function is modified as follows:

$$P_{ACO} = \left(1 - \left(1 - \lambda(2c_1)^d\right)^{w_1 c_2 + w_2}\right)^{nc-1} \qquad (39)$$
Considering constant values of c₁ = 0.08 and c₂ = 30 for the welded beam problem, the
effect of λ on P_ACO is investigated as shown in Figure 12. With the increase in λ, the
values of P_ACO in all curves converge to one.
When the value of λ is selected as one (a random search), P_ACO decreases
significantly as the number of stages increases; whereas if λ is set to 2.5, as shown in
Figure 13, P_ACO remains very close to one in almost all stages.
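Equations (36)–(39) are straightforward to tabulate. The sketch below is a hedged illustration of how λ raises the retention probability; the defaults are the welded beam values quoted in the text, and d = 4 (the number of design variables of that problem) is an assumption of this sketch:

```python
def p_aco(lam, c1=0.08, c2=30, d=4, nc=16, w1=28.982, w2=94.651):
    """Equation (39): probability that at least one ant stays inside the
    favorite subdomain through the last SOM stage.

    lam is the search-capacity factor lambda; lam = 1 corresponds to a
    plain random search.
    """
    nf = w1 * c2 + w2                                  # Equation (38)
    p_stage = 1 - (1 - lam * (2 * c1) ** d) ** nf      # Equation (36)
    return p_stage ** (nc - 1)                         # Equation (37)/(39)

random_search = p_aco(lam=1.0)
iaco_like = p_aco(lam=2.5)
```

Raising λ from 1 to 2.5 increases the per-stage retention probability and hence the overall P_ACO by several orders of magnitude, which is the qualitative behaviour shown in Figures 12 and 13.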
In IACO, determining the exact value of λ, or defining a mathematical relation to
quantify it, may not be possible; however, it is obvious that IACO performs far better
than a random search and, as a result, a larger value can be considered for λ in the
Figure 11. The relationship between the number of function evaluations and c₂ for the welded beam problem
presented algorithm. In addition, the effect of c₁ and c₂ on P_ACO is more important
than that of λ, and the result of IACO can reach the actual optimum solution with a
negligible difference, depending on the values of c₁ and c₂.
6.2 Effects of c₁ and c₂ in IACO
From Equation (19), it is clear that 0 < c₁ < 0.5. In the first stages, in which there is
little information about the search space, it is necessary for c₁ to have a large value.
Also, in the last stages, in which the aim of continuing the search process is to improve
the previous solutions (as a local search process), a large value of c₁ may perform better
than a small one. As shown in Figure 14, as c₁ approaches 0.5, nc as well as the number
of function evaluations increase. Conversely, if c₁ is selected very small, the optimum
solution will probably be lost. Considering Figure 14, in this paper c₁ is set to
Figure 13. The P_ACO values for various λ
Figure 12. The effect of λ on P_ACO for various nc
0.30, which needs only nc = 16 stages for the SOM process, and the related P_ACO is
satisfactorily close to one.
The value of c₂ highly influences the IACO performance. If c₂ is too small, the
search process will end rapidly and the value of P_ACO will be small (considering
Equation (39)); on the contrary, if c₂ is selected too large, IACO will perform similarly
to the original ACO algorithm, the effect of the SOM will be eliminated, and a desirable
solution cannot be obtained in fewer evaluations. In addition, c₂ can greatly affect the
optimization time. Because of the linear relationship between c₂ and the number of
evaluations (Equation (38)), one would expect a linear relationship between c₂ and the
required optimization time; however, for the welded beam problem this is not the case
(Figure 15). In ACO-based algorithms, the optimization time depends not only on the
number of function evaluations: the operations performed on the pheromone vectors
also highly affect the optimization time, since all the optimization operations, such as
the updating procedures (both local and global updates) and the creation of the decision
vector, are based on the information stored in the pheromone vectors. The size of the
pheromone vectors is c₂ in IACO; as a result, c₂ highly influences the optimization
time. Considering Figure 15, a quadratic function can describe the relation between c₂
and the optimization time as
Figure 14. The effect of c₁ on nc (considering c₂ = 30, nm_max = 1e4)
Figure 15. The optimization time-c₂ relation and the related fitted curve
$$time \approx nc\left(w_3 c_2^2 + w_4 c_2 + w_5\right) \qquad (40)$$

where, for the welded beam example, w₃ = 3.3923e−4, w₄ = 1.6402e−4, and
w₅ = 0.0336. Considering Equations (39) and (40), c₂ = 30 appears to be a suitable
choice.
6.3 Efficiency of IACO
IACO, utilizing SOM, can convert a continuous problem into a discrete one and
continue the search process until a solution with the required accuracy is reached.
Contrary to previous continuous ACO approaches, IACO does not change the
ACO-based principles; instead, it utilizes SOM to make the handling of continuous
problems possible. Therefore, IACO has the capacity to deal with continuous as well as
discrete problems.
In ACO-based algorithms, the required memory for saving the pheromone vectors,
$\sum_{i=1}^{d} nm_i$, and the size of the search space, $\prod_{i=1}^{d} nm_i$, are related to each
other. Also, the larger the search space selected for an example, the more generations
are required to converge to a solution. Using SOM, the total size of the search space
decreases to $c_2^d \cdot nc$, and at each stage of the optimization process the search space
is $c_2^d$; therefore, the required number of generations is small (Equation (38)).
In addition, the required memory for saving the pheromone vectors reduces to
$d \cdot c_2$. Therefore, utilizing SOM in the ACO algorithm decreases the size of the
pheromone vectors, the size of the search space, and the number of function evaluations.
of the pheromone vectors, size of the search space and number of function evaluations.
Consequently, the optimization time can be reduced. For example for the welded beam
problem, by using Equation (40), the required time for the original ACO (c
1
= 0.5,
c
2
= nm
max
= 1.0e4 and nc = 1) is 3.39e4 seconds while it is 5.50 seconds for IACO
(c
1
= 0.3, c
2
= 30 and nc = 16). However, the probability of finding an optimum
solution is not decreased as shown in Equation (39).
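The two timing figures quoted above follow directly from Equation (40) and the fitted welded beam coefficients; a minimal sketch:

```python
def opt_time(c2, nc, w3=3.3923e-4, w4=1.6402e-4, w5=0.0336):
    """Equation (40): fitted optimization time (seconds) for the welded
    beam example."""
    return nc * (w3 * c2**2 + w4 * c2 + w5)

# Original ACO: a single stage with c2 = nm_max = 1e4 allowable values
original_aco = opt_time(c2=1.0e4, nc=1)
# IACO: 16 SOM stages with only c2 = 30 allowable values each
iaco = opt_time(c2=30, nc=16)
```

The quadratic c₂² term dominates, which is why shrinking the per-stage pheromone vector from 1e4 to 30 entries buys roughly a four-order-of-magnitude speed-up even though 16 stages are run.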
SOM performs as a search-space-updating rule, as indicated in Figure 16(a), which
shows the search space for a variable at each stage of SOM, decreasing as the number
of stages increases. Another advantage of SOM is illustrated in Figure 16(b), which
shows all the search points during the search process. The accumulation of search
points around the global optimum indicates that IACO investigates the favorite space
more than other regions; as a result, IACO not only can be considered a global search
(in the initial stages of SOM), but also works as a local search in the final stages.
Therefore, IACO can be combined with a local search algorithm to improve
performance, or it can be added to a global search technique to work as a local
search tool.
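The updating rule of Figure 16(a) can be sketched as follows. This is an illustrative reading of SOM consistent with Equation (33), not the authors' code: each stage shrinks a variable's interval by the factor 2c₁ around the current stage-best (clipping to the previous bounds is an assumption of this sketch) and rediscretizes it into c₂ allowable values:

```python
def som_stage(lo, hi, best, c1=0.3, c2=30):
    """One SOM search-space update for a single variable.

    Returns the new bounds and the c2 allowable (discrete) values inside
    them; the interval width shrinks by the factor 2*c1 around `best`.
    """
    half = c1 * (hi - lo)
    new_lo = max(lo, best - half)   # clipping to the old bounds is assumed
    new_hi = min(hi, best + half)
    step = (new_hi - new_lo) / (c2 - 1)
    values = [new_lo + k * step for k in range(c2)]
    return new_lo, new_hi, values

lo, hi = 0.1, 10.0   # e.g. the bounds of x2 in the welded beam problem
lo, hi, vals = som_stage(lo, hi, best=3.47)
```

Repeating this step nc times gives the nested, ever-finer grids of Figure 16(a), while the pheromone vector per variable stays at a fixed length of c₂ entries.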
SOM has three parameters, c₁, c₂, and nc. Also, there are three functions in the SOM,
Equations (19), (39), and (40), known as the accuracy-, reliability-, and time-
relationships, respectively, which can be used to determine the parameters of SOM. By
these equations, two approaches to using SOM can be adopted:
(1) given a fixed time to search, SOM can increase the quality of the solutions found
in that time; and
(2) given a fixed solution quality, SOM can reduce the time required to find a
solution not worse than that quality.
7. Concluding remarks
In recent decades, the drawbacks of numerical methods and the advantages of heuristic
algorithms have caused a considerable increase in the application of heuristic methods.
ACO, as a relatively new heuristic approach, is used for solving various discrete
optimization problems. Although ACO performs very well in small search domains, it
has some defects in large-scale problems. Previously, some attempts were made to
parallelize ACO in order to improve its performance; however, it seems that parallelized
approaches alone cannot overcome the problems encountered in the optimization of
large structures. On the other hand, other attempts were made to employ ACO for
continuous search domains, but none of them could sufficiently perform their role for
this aim.
In this paper, an IACO for solving engineering problems consisting of continuous or
discrete search domains by using SOM is introduced. SOM, based on the principles of
the finite element method, can be considered as the repetition of the following four
steps for a definite number of times:
(1) calculate the permissible bounds for each variable;
(2) determine the accuracy value for the variables;
(3) create the series of allowable values for the variables; and
(4) determine the optimum solution of the current stage.
Utilizing SOM in the ACO algorithm decreases the size of the pheromone vectors, the
size of the decision vector, the size of the search space, the number of function
evaluations, and finally the required optimization time. SOM performs as a
search-space-updating rule, and it can exchange discrete and continuous search
domains with each other.
Figure 16. The performance of SOM
There are three functions in the SOM, known as the accuracy-, reliability-, and time-
relationships, which can be used to determine the approach to using SOM: given a
fixed time to search, increasing the quality of the solutions; or, given a fixed solution
quality, reducing the optimization time. As a result, SOM has the capacity to control the
computational cost and the quality of the solutions. Therefore, IACO has the capacity
to handle continuous and discrete problems. Also, IACO can work as a global search or
as a local search, depending on the request. The robustness of the proposed method is
illustrated by comparisons based on several well-studied benchmark engineering
problems.
References
Abbaspour, K.C., Schulin, R. and van Genuchten, M.Th. (2001), Estimating unsaturated soil
hydraulic parameters using ant colony optimization, Advances in Water Resources,
Vol. 24, pp. 827-41.
Arora, J.S. (1989), Introduction to Optimum Design, McGraw-Hill, New York, NY.
Belegundu, A.D. (1982), A Study of Mathematical Programming Methods for Structural
Optimization, PhD thesis, Department of Civil and Environmental Engineering,
University of Iowa, Iowa, IA.
Benkner, S., Doerner, K.F., Hartl, R.F., Kiechle, G. and Lucka, M. (2005), Communication
strategies for parallel cooperative ant colony optimization on clusters and grids,
Complimentary Proceedings of PARA04 Workshop on State-of-the-Art in Scientific
Computing, Lyngby, pp. 3-12.
Bilchev, G. and Parmee, I.C. (1995), The ant colony metaphor for searching continuous design
spaces, in Fogarty, T.C. (Ed.), Proceedings of the AISB Workshop on Evolutionary
Computation, Springer-Verlag, Berlin, LNCS, Vol. 993, pp. 25-39.
Bullnheimer, B., Kotsis, G. and Straub, C. (1998), Parallelization strategies for the Ant System,
in De Leone, R., et al. (Eds), High Performance Algorithms and Software in Nonlinear
Optimization, Kluwer Academic Publishers, Norwell, MA, pp. 87-100.
Camp, C. and Bichon, J. (2004), Design of space trusses using ant colony optimization, Journal of
Structural Engineering, ASCE, Vol. 130 No. 5, pp. 741-51.
Camp, C., Pezeshk, S. and Cao, G. (1998), Optimized design of two dimensional structures using
a genetic algorithm, Journal of Structural Engineering, ASCE, Vol. 124 No. 5, pp. 551-9.
Coello, C.A.C. (2000), Use of a self-adaptive penalty approach for engineering optimization
problems, Computers in Industry, Vol. 41, pp. 113-27.
Coello, C.A.C. and Montes, E.M. (2002), Constraint-handling in genetic algorithms through the
use of dominance-based tournament selection, Advanced Engineering Informatics, Vol. 16,
pp. 193-203.
Das, A. (2000), Optimal channel cross section with composite roughness, Journal of Irrigation
and Drainage Engineering-ASCE, Vol. 126 No. 1, pp. 68-72.
Deb, K. (1991), Optimal design of a welded beam via genetic algorithms, AIAA Journal, Vol. 29
No. 11, pp. 2013-5.
Deb, K. (2000), An efficient constraint handling method for genetic algorithms, Computer
Methods in Applied Mechanics and Engineering, Vol. 186, pp. 311-38.
Deb, K. and Gene, A.S. (1997), A robust optimal design technique for mechanical component
design, in Dasgupta, D. and Michalewicz, Z. (Eds), Evolutionary Algorithms in
Engineering Applications, Springer, Berlin, pp. 497-514.
Deneubourg, J.L. and Goss, S. (1989), Collective patterns and decision-making, Ethology
Ecology and Evolution, Vol. 1, pp. 295-311.
Dobbs, M.W. and Nelson, R.B. (1976), Application of optimality criteria to automated structural
design, AIAA Journal, Vol. 14 No. 10, pp. 1436-43.
Dorigo, M. (1992), Optimization, learning and natural algorithms, PhD thesis, Dip. Elettronica e
Informazione, Politecnico di Milano, Milano.
Dorigo, M. and Di Caro, G. (1999), Ant colony optimization: a new meta-heuristic, Proceedings of
the 1999 Conference on Evolutionary Computation, Vol. 2, pp. 1470-7.
Dorigo, M. and Stutzle, T. (2004), Ant Colony Optimization, The MIT Press, Cambridge, MA.
Dorigo, M., Maniezzo, V. and Colorni, A. (1996), The ant system: optimisation by a colony of
cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics. Part B,
Cybernetics, Vol. 26 No. 1, pp. 29-41.
Dreo, J. and Siarry, P. (2002), A new ant colony algorithm using the heterarchical concept aimed
at optimization of multiminima continuous functions, in Dorigo, M., Di Caro, G. and
Sampels, M. (Eds), Proceedings of the Third International Workshop on Ant Algorithms
(ANTS2002), Springer-Verlag, Berlin, LNCS, Vol. 2463, pp. 216-21.
Gellatly, R.A. and Berke, L. (1971), Optimal structural design, AFFDL-TR-70-165, Air Force
Flight Dynamics Lab., Wright-Patterson AFB, Dayton, OH.
Goss, S., Beckers, R., Deneubourg, J.L., Aron, S. and Pasteels, J.M. (1990), How trail laying and
trail following can solve foraging problems for ant colonies, in Hughes, R.N. (Ed.),
Behavioural Mechanisms in Food Selection, NATO-ASI Series, Vol. G20, Berlin.
He, Q. and Wang, L. (2007), An effective co-evolutionary particle swarm optimization for
constrained engineering design problems, Engineering Applications of Artificial
Intelligence, Vol. 20, pp. 89-99.
Jain, A., Bhattacharjya, R.K. and Sanaga, S. (2004), Optimal design of composite channels using
genetic algorithm, Journal of Irrigation and Drainage Engineering-ASCE, Vol. 130 No. 4,
pp. 286-95.
Kannan, B.K. and Kramer, S.N. (1994), An augmented Lagrange multiplier based method for
mixed integer discrete continuous optimization and its applications to mechanical design,
Transactions of the ASME, Journal of Mechanical Design, Vol. 116, pp. 318-20.
Kaveh, A. and Jahanshahi, M. (2008), Plastic limit analysis of frames using ant colony systems,
Computers and Structures, Vol. 86, pp. 1152-63.
Kaveh, A. and Shahrouzi, M. (2008), Dynamic selective pressure using hybrid evolutionary and
ant system strategies for structural optimization, International Journal for Numerical
Methods in Engineering, Vol. 73 No. 4, pp. 544-63.
Kaveh, A. and Shojaee, S. (2007), Optimal design of skeletal structures using ant colony
optimization, International Journal for Numerical Methods in Engineering, Vol. 70 No. 5,
pp. 563-81.
Kaveh, A. and Talatahari, S. (2008), A hybrid particle swarm and ant colony optimization for
design of truss structures, Asian Journal of Civil Engineering, Vol. 9 No. 4, pp. 329-48.
Kaveh, A., Farhmand Azar, B. and Talatahari, S. (2008a), Ant colony optimization for design of
space trusses, International Journal of Space Structures, Vol. 23 No. 3, pp. 167-81.
Kaveh, A., Hassani, B., Shojaee, S. and Tavakkoli, S.M. (2008b), Structural topology optimization
using ant colony methodology, Engineering Structures, Vol. 30 No. 9, pp. 2559-65.
Khan, M.R., Willmert, K.D. and Thornton, W.A. (1979), An optimality criterion method for
large-scale structures, AIAA Journal, Vol. 17 No. 7, pp. 753-61.
Lee, K.S. and Geem, Z.W. (2005), A new meta-heuristic algorithm for continuous engineering
optimization: harmony search theory and practice, Computer Methods in Applied
Mechanics and Engineering, Vol. 194, pp. 3902-33.
Li, Y. and Chan Hilton, A.B. (2006), Reducing spatial sampling in long-term groundwater
monitoring using ant colony optimization, International Journal of Computational
Intelligence Research, Vol. 1 No. 1, pp. 19-28.
Li, L.I., Huang, Z.B., Liu, F. and Wu, Q.H. (2007), A heuristic particle swarm optimizer for optimization
of pin connected structures, Computers and Structures, Vol. 85, pp. 340-9.
Maier, H.R., Simpson, A.R., Zecchin, A.C., Foong, W.K., Phang, K.Y., Seah, H.Y. and Tan, C.L.
(2003), Ant colony optimization for design of water distribution systems, Journal of
Water Resources Planning and Management, Vol. 129 No. 3, pp. 200-9.
Middendorf, M., Reischle, F. and Schmeck, H. (2002), Multi colony ant algorithms, Journal of
Heuristics, Vol. 8 No. 3, pp. 305-20.
Montes, E.M. and Coello, C.A.C. (2008), An empirical study about the usefulness of evolution
strategies to solve constrained optimization problems, International Journal of General
Systems, Vol. 37 No. 4, pp. 443-73.
Nourani, V., Monadjemi, P., Talatahari, S. and Shahradfar, S. (2009), Application of ant colony
optimization to investigate velocity profile effect on optimal design of open channels,
Journal of Hydraulic Research (submitted for publication).
Piriyakumar, D.A.L. and Levi, P. (2002), A new approach to exploiting parallelism in ant colony
optimization, International Symposium on Micromechatronics and Human Science (MHS),
Nagoya, Proceedings, IEEE Standard Office, pp. 237-43.
Qie, H. and Wang, L. (2007), A hybrid particle swarm optimization with a feasibility-based rule for
constrained optimization, Applied Mathematics and Computation, Vol. 186, No. 2, pp. 1407-22.
Ragsdell, K.M. and Phillips, D.T. (1976), Optimal design of a class of welded structures using
geometric programming, ASME Journal of Engineering for Industry, Ser. B, Vol. 98 No. 3,
pp. 1021-5.
Rajeev, S. and Krishnamoorthy, C.S. (1992), Discrete optimization of structures using genetic
algorithms, Journal of Structural Engineering, ASCE, Vol. 118 No. 5, pp. 1233-50.
Rizzi, P. (1976), Optimization of multiconstrained structures based on optimality criteria, paper
presented at AIAA/ASME/SAE 17th Structures, Structural Dynamics, and Materials
Conference, King of Prussia, PA.
Sandgren, E. (1988), Nonlinear integer and discrete programming in mechanical design,
Proceedings of the ASME Design Technology Conference, Kissimmee, FL, pp. 95-105.
Schmit, L.A. Jr and Farshi, B. (1974), Some approximation concepts for structural synthesis,
AIAAJournal, Vol. 12 No. 5, pp. 692-9.
Schmit, L.A. Jr and Miura, H. (1976), Approximation concepts for efficient structural synthesis,
NASA CR-2552, NASA, Washington, DC.
Socha, K. and Dorigo, M. (2008), Ant colony optimization for continuous domains, European
Journal of Operational Research, Vol. 185, pp. 1155-73.
Venkayya, V.B. (1971), Design of optimum structures, Computers and Structures, Vol. 1 Nos. 1/2,
pp. 265-309.
Further reading
Horton, R.E. (1933), Separate roughness coefficients for channel bottom and sides, Engineering
News Record, Vol. 111 No. 22, pp. 652-3.
Corresponding author
A. Kaveh can be contacted at: alikaveh@iust.ac.ir
To purchase reprints of this article please e-mail: reprints@emeraldinsight.com
Or visit our web site for further details: www.emeraldinsight.com/reprints