
Computers & Chemical Engineering Vol. 8, No. 1, pp. 1-7, 1984    0098-1354/84 $3.00 + .00
Printed in Great Britain.    Pergamon Press Ltd.

ON SOLVING LARGE SPARSE NONLINEAR EQUATION SYSTEMS

HERN-SHANN CHEN† and MARK A. STADTHERR*


Chemical Engineering Department, University of Illinois, Urbana, IL 61801, U.S.A.

(Received 24 September 1982; received for publication 4 May 1983)

Abstract-An approach toward solving large sparse nonlinear equation systems using a
modified Powell’s dogleg method is described. Schubert’s update is used either to update the
Jacobian, or to update the U factor of the Jacobian while holding the L factor constant. This
use of Schubert’s update differs from others since it is used in connection with a dogleg method
rather than the usual quasi-Newton method. Results on a set of standard test problems
indicate that this leads to an efficient and reliable method for sparse nonlinear problems.

Scope-There are many applications in which a large sparse system of nonlinear equations
must be solved. Some relevant applications in chemical engineering include the numerical
solution of partial differential equations by the finite difference approach, the simulation of
multi-stage separation processes by linearization [1], and process flowsheeting by the equation-
based approach [2]. This work has been motivated primarily by applications of the last two
types.
As discussed in more detail by Stadtherr & Hilton [2], a common concern in equation-based
process flowsheeting is supplying an initial guess good enough to assure convergence to the
solution. One way to ease such concerns is to use a sparse nonlinear equation solver that is
efficient and has excellent global convergence properties. The work described here
represents progress toward this goal.
For full matrix problems, there have been several techniques [3-91 proposed to promote
convergence from a poor initial guess. All except Powell’s hybrid, or dogleg, method [9] are
variations of the continuation method and require the solution of a series of subproblems, even
in the case of a good initial guess. Powell’s method combines the good global convergence
properties of the steepest descent methods and the good local convergence properties of
Broyden's quasi-Newton method [10]. The result is a reliable and efficient method that has
proven quite successful [11].
Recently Chen & Stadtherr [12] described a modification of Powell’s method that makes
it even more efficient and reliable. In this paper we describe extensions of this modified Powell’s
method to the case in which a large sparse problem must be solved. Of particular interest is
the method used to update the Jacobian. Can Schubert’s update, which has proven somewhat
unreliable when used in connection with a quasi-Newton method [13, 14], be used more
successfully in connection with a dogleg approach?

Conclusions and Significance-The modified Powell's dogleg method described by Chen &
Stadtherr [12] for full matrix problems can be effectively extended to sparse problems using
Schubert’s update either to update the Jacobian or to update the U factor of the Jacobian.
This indicates that while Schubert’s update may be unreliable when used as part of a
quasi-Newton solution method, as indicated by the results of Perkins & Sargent [13] and Mah
& Lin [14], it can be used much more successfully in connection with the modified Powell's
method. This is possible because the modified Powell’s method, a combination of ideas from
the quasi-Newton approach and the steepest descent approach, is much more reliable than the
quasi-Newton approach alone. Since the modified Powell’s method is efficient and has very
good global convergence properties, its application in connection with equation-based process
flowsheeting seems particularly attractive.

†Present address: Union Carbide Corporation, South Charleston, WV 25303, U.S.A.


*Author to whom correspondence should be addressed.

BACKGROUND

A detailed description of the modified Powell's method employed here is given by Chen & Stadtherr [12]. Before discussing its extension to sparse problems, we briefly outline this full matrix algorithm.

Given an initial estimate x0 of the solution, a diagonal variable scaling matrix Dv, an initial step bound Δ, a convergence parameter δ, and a function f(x) = 0 to be solved, proceed as follows:
1. Calculate the Jacobian B by forward difference.
2. Calculate a diagonal function scaling matrix Df, and scale the Jacobian. Df is calculated so that the largest element in each row of Df B Dv^-1 has a magnitude of 1.
3. Decompose the scaled Jacobian into its L and U factors.
4. Calculate the search step pk and adjust pk if necessary. Powell's dogleg formula [9] is used to compute pk. The initial value of the step bound Δ used in this computation can be specified directly by the user, or calculated from another user-provided parameter as described in [12]. For problems in which all variables must be nonnegative, pk is adjusted as necessary to maintain the nonnegativity of the variables. This is a user-selected option.
5. Calculate f(xk + pk) and check for convergence. Accept xk + pk as a solution if the convergence criteria are satisfied.
6. Check for very slow convergence or nonconvergence. If little or no progress toward the solution is detected the algorithm stops.
7. Check whether it is necessary to calculate a new Jacobian. The Jacobian is re-evaluated occasionally when the rate of convergence begins to slow considerably. If this is necessary, return to Step 1.
8. Define xk+1 and update the step bound Δ. The next iterate xk+1 is defined so as to maintain a monotonic decrease in the norm of the scaled function Df f.
9. Update the L and U factors and return to Step 4. Broyden's update formula [10] is used, along with Bennett's method [15] for actually computing the modified triangular factors L and U.

The advantages of the modified Powell's method are considered in detail in [12]. A recent study by Hiebert [11] (see [16] for more detailed conclusions than presented in [11]) compared several nonlinear equation solvers on a set of standard test problems, and found HYBRD, which implements a version of Powell's method and is part of Argonne's MINPACK library, to be the best performer overall. The numerical results presented in [12] for the same set of test problems show that our implementation NEQLU of the algorithm outlined above outperforms HYBRD both in terms of efficiency and reliability. A listing of the source code for NEQLU, along with some sample problems, is available [17].

At this point we would like to divert briefly to a discussion of the usual implementation of nonlinear equation solvers. In a typical implementation, such as in [17], a calling program is used to call the nonlinear equation solver and to pass as arguments external routines for function and Jacobian evaluation. The nonlinear equation solver then takes full control and performs calculations until it finds a solution or it fails. The drawbacks of this scheme are: (1) If the function and/or Jacobian routines have to share some parameters with the calling program then these parameters must be passed via COMMON statements, which is inconvenient and often a source of error; (2) The calling program has no control over the ranges of acceptable values of the unknowns; (3) If the function evaluation requires the solution of another system of nonlinear equations then another copy of the nonlinear equation solver with a different subroutine name is needed; (4) The function may become undefined for some values of the unknowns.

These problems can be solved by using the reverse communication technique. The basic idea of the technique is that the nonlinear equation solver only suggests estimates of the solution, and when function evaluations are required, it returns control to the calling program. Detailed discussions of the technique can be found in Krogh [18] and Connet [19]. When applying the modified Powell's method to solve process flowsheeting problems, we find an implementation that uses the reverse communication technique to be much more convenient to use than the usual implementation.

SOLUTION OF SPARSE NONLINEAR SYSTEMS

Consider the solution of the n x n system of nonlinear equations f(x) = 0, where n is large and the Jacobian is sparse. When such large sparse nonlinear equation systems arise in equation-based flowsheeting they are typically solved using the Newton-Raphson (NR) method, often in connection with some step-size relaxation scheme. The numerical studies of Stadtherr & Hilton [2], using the prototype equation-based flowsheeting system SEQUEL, also described in [2], indicate that NR with step-size relaxation is a reasonably reliable approach, at least when a relatively good initial guess is supplied. An approach to improving the global convergence properties of the NR method has been suggested by Gorczynski & Hutchison [20]. They suggest that in computing a linearization during each iteration, a first-order linearization be blended in some proportion with the standard second-order NR linearization. In general, the proportion of first-order linearization used is decreased as one proceeds toward the solution. The idea is that the first-order linearization will provide reliability at the expense of speed when far from the solution, while the more accurate second-order linearization will provide greater speed as the solution is approached. The numerical studies in [2] show that this so-called quasilinear approach does improve the success rate from poor initial guesses, but that considerable improvement is still needed.

As emphasized by Shacham et al. [21] a serious limitation of the NR method in connection with equation-based flowsheeting is the need to calculate a new Jacobian at each iteration. For some problems it is feasible to calculate the Jacobian analytically, although coding the analytical partial derivatives can be a very time consuming and error prone process. For other problems it may be necessary to calculate the Jacobian using finite differences. The structure of the sparse Jacobian for flowsheeting problems is such that this can be done without too much difficulty using the technique of Curtis et al. [22]. The idea involved in this technique can be briefly demonstrated using the occurrence matrix in Fig. 1. If we perturb variable x1 we will change functions f1 and f4.
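The grouping idea of Curtis et al. [22] that this example illustrates can be sketched in code. The following Python fragment is our own illustrative reconstruction (the greedy grouping loop, the function names and the step size h are our assumptions), not the Fortran used in this work:

```python
import numpy as np

def cpr_groups(pattern):
    """Greedily group columns of a boolean sparsity pattern so that no two
    columns in a group have a nonzero in the same row (the Curtis-Powell-
    Reid idea).  Each group then costs one extra function evaluation."""
    n = pattern.shape[1]
    groups, used_rows = [], []
    for j in range(n):
        rows = set(np.nonzero(pattern[:, j])[0])
        for g, used in zip(groups, used_rows):
            if not rows & used:          # no shared equation: join this group
                g.append(j)
                used |= rows
                break
        else:                            # conflicts with every group: start a new one
            groups.append([j])
            used_rows.append(set(rows))
    return groups

def fd_jacobian(f, x, pattern, h=1e-6):
    """Finite-difference Jacobian using one evaluation per column group."""
    fx = f(x)
    J = np.zeros(pattern.shape)
    for group in cpr_groups(pattern):
        xp = x.copy()
        for j in group:
            xp[j] += h                   # perturb all columns of the group at once
        df = (f(xp) - fx) / h
        for j in group:                  # each row belongs to exactly one column
            rows = np.nonzero(pattern[:, j])[0]
            J[rows, j] = df[rows]
    return J
```

For a tridiagonal pattern, for example, the greedy grouping yields three groups, so a full Jacobian costs only three extra function evaluations regardless of n.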

If we perturb x3 we will change functions f2 and f3. Since these two variables have no equations in common, we can perturb variables x1 and x3 at the same time and evaluate columns 1 and 3 of the Jacobian using one function evaluation. Using this idea, a tridiagonal Jacobian can be evaluated using only three function evaluations. For process flowsheeting problems, the number of function evaluations required to evaluate a Jacobian by this approach is typically a few multiples of the number of components in the system.

[Fig. 1. Occurrence matrix for example in text: a 4 x 4 pattern in which, among other entries, variable x1 occurs only in f1 and f4, and variable x3 only in f2 and f3.]

Because of the possible difficulty involved in calculating the Jacobian for flowsheeting problems there has been considerable interest in the use of quasi-Newton methods [21]. In this case rather than re-evaluate the Jacobian at each iteration, an approximate Jacobian is updated. For full matrix problems, the most popular quasi-Newton method is Broyden's [10]. His update formula for approximating the Jacobian can be applied directly to the Jacobian, or it can be put in a form allowing direct updating of the L and U factors of the Jacobian, thus avoiding the need to factor the Jacobian at each iteration. Broyden's update is not applicable to large sparse systems, however, because the approximate Jacobian it generates will in general be full.

For sparse problems, the Broyden-type update proposed by Schubert [23] is well known. In this case the Jacobian is updated under the constraint that all known constant coefficients be actually held constant. This is often implemented so that only the zero elements are actually held constant. A drawback in using Schubert's update is that, unlike Broyden's update, it cannot be applied directly to the factorization of the Jacobian. Thus when Schubert's update is used, the approximate Jacobian must be refactored at each iteration. Some approaches to avoiding this drawback are discussed below. It should be noted that Perkins & Sargent [13], as well as Mah & Lin [14], have found quasi-Newton methods using Schubert's update to be unreliable in comparison to the NR method. The work presented here differs from this earlier work in that here Schubert's update is incorporated in a dogleg method.

We now proceed to discuss methods for extending the modified Powell's method described in [12] to the sparse matrix case. As in full matrix problems, the goal is to combine the desirable features of the quasi-Newton approach discussed above with the desirable features of the steepest descent approach, thus obtaining a method that is both efficient and very reliable.

EXTENDING THE MODIFIED POWELL'S ALGORITHM

The modified Powell's algorithm outlined above can be extended directly to the large, sparse matrix case except for Steps 1, 3 and 9. For these steps the following problems arise:
1. It is no longer feasible in Step 1 to evaluate the Jacobian by a simple finite difference scheme, because the n function evaluations required would be too expensive for large n. For sparse problems, this difficulty can easily be resolved using the finite difference scheme of Curtis et al. [22] discussed above to calculate the Jacobian initially and whenever required subsequently. The user may also be able to supply an analytical Jacobian.
2. It is no longer feasible in Step 3 to solve the linear system of equations by full-matrix LU decomposition, because it requires n^2 storage locations and O(n^3) arithmetic operations.
3. Broyden's method for updating the Jacobian is not directly applicable in Step 9 because it will generate a full approximate Jacobian and thus requires n^2 storage locations.
The techniques used to overcome the first problem have already been discussed above. We now proceed to discuss the second and third problems in more detail.

Solving large sparse linear systems

To solve a large system of sparse linear equations efficiently one must of course use sparse matrix techniques. The basic ideas of sparse matrix techniques are to store and operate only on nonzero elements and to avoid too many new nonzeros (fill-ins) during Gaussian elimination. For more background on the solution of large sparse linear systems, especially those arising in process flowsheeting problems, the reader is referred to Stadtherr & Wood [24, 25]. An extensive survey of sparse matrix techniques and their applications is given by Duff [26].

As discussed in more detail in [24], an important decision in designing a sparse matrix program is whether to use the one-pass approach or the two-pass approach. In the one-pass approach, the program alternates between selecting one pivot and performing one Gaussian elimination step. The most popular method of this type is the Markowitz approach with threshold pivoting, a popular implementation of which is the well known code MA28 [27], a part of the Harwell subroutine library. In the two-pass approach, a complete tentative pivot sequence is first selected based on sparsity considerations alone, and this tentative pivot sequence is then used, perhaps with some modifications, in performing the actual Gaussian elimination. The most popular reordering method for selecting the initial tentative pivot sequence is probably the P4 algorithm of Hellerman & Rarick [28].

In a recent study by Stadtherr & Wood [24], it was found that for process flowsheeting matrices neither the Markowitz nor the P4 technique works well. This appears to be because some equations (material balance equations) contain only a few variables while other equations (equilibrium and energy balance equations) may contain several more variables. For such matrices, the usual sparse matrix techniques tend to first choose pivots from the equations with only a few elements, thus destroying the block structure inherent in these problems by taking successive pivots from several different blocks.
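The Markowitz rule with threshold pivoting mentioned above can be illustrated with a short sketch. This is a naive, dense-pattern Python rendering of the pivot-selection rule only (the threshold value u = 0.1 and all names are our own assumptions); production codes such as MA28 work on compact sparse data structures and interleave selection with elimination:

```python
import numpy as np

def markowitz_pivot(A, u=0.1):
    """Select one pivot by the Markowitz criterion with threshold pivoting.

    Among entries a_ij with |a_ij| >= u * (largest magnitude in column j),
    choose the one minimizing (r_i - 1)*(c_j - 1), where r_i and c_j are
    the nonzero counts of row i and column j.  Small Markowitz cost means
    little potential fill-in; the threshold guards numerical stability.
    """
    nz = A != 0.0
    r = nz.sum(axis=1)                     # nonzeros per row
    c = nz.sum(axis=0)                     # nonzeros per column
    col_max = np.abs(A).max(axis=0)
    best, best_cost = None, None
    for i, j in zip(*np.nonzero(A)):
        if abs(A[i, j]) < u * col_max[j]:
            continue                       # fails the stability threshold
        cost = (r[i] - 1) * (c[j] - 1)
        if best_cost is None or cost < best_cost:
            best, best_cost = (i, j), cost
    return best
```

Each call selects a single pivot; a one-pass solver would alternate such selections with elimination steps that update the remaining pattern and counts.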

Stadtherr & Wood [24] suggest a block reordering algorithm to exploit the structure of process flowsheeting matrices. They first construct a block-occurrence matrix and apply a P4-type reordering algorithm (SPK2) to reorder the block matrix. Then a simple reordering algorithm (SPK1) is applied to reorder the individual equations and variables, while restricting the spike selection to avoid destroying the block reordering originally identified. This algorithm (BLOKS) was found to be very efficient and to give consistently good reorderings for flowsheeting problems.

In a related study [25] Stadtherr & Wood also compared some algorithms for performing the actual elimination. A modified version of the NSPIV program [29], LU1SOL, was found to be among the most effective codes. It should be noted that the version of LU1SOL used in [25] did not attempt to exploit the presence of irreducible blocks in the matrix and thus failed to solve some of the larger test problems due to storage limitations. An updated version of LU1SOL does exploit this type of structure, limiting fill-in to within the irreducible blocks, and thus in general permitting the solution of larger problems. In NSPIV partial pivoting is used to maintain numerical stability; however, it does not take advantage of the a priori reordering scheme used. In LU1SOL we change the pivoting strategy from partial pivoting to threshold pivoting to take advantage of the a priori reordering. The performance of the two-pass codes NSPIV and LU1SOL, using the BLOKS reordering algorithm, as well as the one-pass code MA28, on some process flowsheeting matrices is shown in Table 1. The results here are taken from Stadtherr & Wood [25] except for the last run, which uses the updated version of LU1SOL discussed above. It is clear from this table that we can solve fairly large systems of linear equations with reasonable CPU time. And since the largest problem can be solved with LU1SOL in roughly 100,000 words of core, storage requirements are also reasonable. As a final note we add that if the structure of the L and U factors is saved when performing the first factorization of the Jacobian, then this information can be used to reduce the solution time required for subsequent factorizations of Jacobians with the same structure. For instance, subsequent factorizations of the 4633-equation problem require only 2.013 sec with LU1SOL.

Table 1. Solution times for some sparse linear equation solvers on process flowsheeting problems. N is the number of variables; NZ is the number of nonzero elements. Failures were due to insufficient core storage. Except for the largest problem, data is taken from Stadtherr & Wood [25]

         SIZE               SOLUTION TIME (SEC)
    N        NZ        MA28      NSPIV     LU1SOL
    392      -         1.633     0.548     0.384
    584      3963      1.187     1.698     0.509
    1068     6254      3.700     0.775     0.546
    1564     9369      6.002     1.445     0.981
    4633     33443     FAIL      FAIL      4.681

Updating the sparse Jacobian

The problem here is to maintain the sparsity of the Jacobian when it is updated. There are at least three approaches for solving this problem:
1. Use Schubert's update, as described above. Some characteristics of this approach are: (i) An LU factorization must be performed at each iteration; (ii) During LU factorization the approximate Jacobian and its U factor must be saved; (iii) The convergence of Schubert's update is q-superlinear [30]. The following two approaches avoid the drawback that an LU factorization is needed at every iteration.
2. In this case, Broyden's update is used to update the Jacobian, but in order to maintain sparsity the rank one updates used are stored separately, as suggested by Gallun & Holland [31]. Some characteristics of this approach are: (i) An LU factorization is only performed when a new Jacobian is calculated; (ii) During LU factorization both the L and U factors of the Jacobian must be saved; (iii) Two extra vectors of length n are needed each time the Jacobian is updated; (iv) The number of arithmetic operations required to obtain the correction step will increase with the number of rank one updates stored; (v) The convergence of Broyden's update is q-superlinear [32].
3. In this case, Schubert's update is used on the U factor of the Jacobian while keeping the L factor fixed, as suggested by Dennis & Marwil [33]. Some characteristics of this approach are: (i) An LU factorization is only performed when a new Jacobian is calculated; (ii) Both L and U factors of the Jacobian must be saved; (iii) The storage and arithmetic operations will not be affected by the number of Jacobian updates, unlike the previous case; (iv) The convergence of this update is also q-superlinear [33].

We have employed both the first and third approaches discussed above. The performance of these two approaches is compared below.

Before proceeding to some numerical results, we summarize the actions taken to extend our modified Powell's method to the solution of large sparse problems.
1. The finite difference approximation scheme of Curtis et al. is used to evaluate the Jacobian initially and whenever required subsequently.
2. A two-pass approach is used to perform the LU factorization required. For flowsheeting problems we recommend the algorithm BLOKS [24] for the reordering phase.
3. Schubert's update is used either to update the Jacobian or its U factor.

NUMERICAL RESULTS

In order to study the reliability and efficiency of the modified Powell's method on sparse problems, we have used a number of relatively small test problems, and applied three versions of the method, differing in how the Jacobian is updated. Perkins & Sargent [13] have applied two versions of the NR method and three versions of the quasi-Newton method using Schubert's update to these same problems. Thus we can compare the reliability of these approaches to the modified Powell's method. We also make comparisons based on the efficiency and reliability of the different methods used to update the Jacobian in the modified Powell's method.

Test problems

Five standard test problems, commonly used in comparisons of nonlinear equation solvers, are considered:
1. Discrete boundary value problem.

2. Broyden's tridiagonal function.
3. Broyden's banded function.
4. Extended Rosenbrock function.
5. Extended Powell's singular function.
Perkins and Sargent also use a sixth, less standard problem. These are all variably dimensioned problems, and each problem was tried with three different dimensions: 50, 100 and 200 variables. Also, in order to test the performance of the methods considered on poor initial guesses, three starting points are used: x0 = xs (the "standard" starting point for each problem), x0 = 10xs, and x0 = 100xs.

Methods compared

The five methods considered by Perkins and Sargent are:
1. Finite-difference Newton-Raphson method with perturbation sizes proportional to variable values.
2. Finite-difference Newton-Raphson method with perturbation sizes proportional to function values.
3. Quasi-Newton method with non-constant elements of the sparse Jacobian updated by Schubert's update.
4. Quasi-Newton method with nonzero elements of the sparse Jacobian updated by Schubert's update.
5. Same as 4 except the Jacobian is recalculated when convergence becomes sufficiently slow.
The new methods considered are:
6. Modified Powell's method with the Jacobian updated by Broyden's method (full matrix update).
7. Modified Powell's method with nonzero elements of the sparse Jacobian updated by Schubert's method.
8. Modified Powell's method with the sparse upper triangular factor of the Jacobian matrix updated by Schubert's method (L factor held constant).

Reliability

The reliability of the eight methods tested is summarized in Table 2, which shows the number of failures of each method on these test problems. The results for methods 1-5 are from Perkins & Sargent [13], as adjusted to account for the fact that they include a sixth test problem. To assure a conservative adjustment, we assumed that methods 3-5 always failed on the sixth test problem and then reduced the number of failures reported in [13] accordingly. As a result the number of failures shown in Table 2 for methods 3-5 on these five problems is probably somewhat understated.

Table 2. Number of failures on test problem set. Data for methods 1-5 are taken from Perkins & Sargent [13], adjusted as explained in the text

    METHOD:     1    2    3    4    5    6    7    8
    S.P.: X     0    0    3    3    3    6    0    0
         10X    0    0    6    6    6    6    0    0
        100X    0    0    6    9    6    6    0    0
    OVERALL:    0    0   15   18   15   18    0    0

Regarding these results, the following comments are in order:
1. Both versions of the NR method (methods 1 and 2) proved reliable on this set of test problems.
2. All three versions of Perkins and Sargent's implementation of the quasi-Newton method using Schubert's update (methods 3-5) failed on a large number of problems. This is not inconsistent with the results of Mah & Lin [14], who also found that their implementation of Schubert's method became inefficient and unreliable when poor initial guesses were used.
3. The version of the modified Powell's method that does not take the sparsity of the Jacobian into account when updating the Jacobian (method 6) failed on problems 4 and 5. This is not surprising, and is partially because Broyden's update treats these as problems of dimensions 50-200, while actually these problems are made up of smaller subproblems of dimension 2 or 4.
4. Both versions of the modified Powell's method that use Schubert's method, either to update the Jacobian (method 7) or to update the U factor of the Jacobian (method 8), solved all of the problems. This suggests that the low reliability of methods 3-5 is not due to the use of Schubert's update, but is due to the method used to determine the correction step, or perhaps due to the implementation itself. The results here indicate that the modified Powell's method, which combines ideas from the quasi-Newton approach and the steepest descent approach, is much more reliable than the quasi-Newton approach alone, thereby permitting a more successful application of Schubert's update than has been reported elsewhere [13, 14].

Effect of Jacobian updating scheme

Table 3 shows how the Jacobian updating scheme used in the modified Powell's method affects the number of iterations and the number of Jacobian evaluations required for each problem. The following comments are in order:
1. The dimensionality of the problem has very little effect on the number of iterations or Jacobian evaluations required to solve the problems.
2. The method that updates the Jacobian by Schubert's method usually gives convergence in the fewest iterations. However, in some applications, the method that updates the U factor of the Jacobian may still be preferred because it requires far fewer LU factorizations of the Jacobian.

Table 3. Efficiency of different approaches for updating the Jacobian in the modified Powell's method. Results are given in terms of number of iterations followed by number of Jacobian evaluations

                        N = 50                N = 100               N = 200
    PROBLEM  S.P.    6     7     8         6     7     8         6     7     8
    1        X      4/1   4/1   5/1       4/1   4/1   5/1       4/1   4/1   5/1
             10X    7/1   6/1  13/1       7/1   5/1  11/1       7/1   5/1  12/1
             100X  23/2  16/1  22/2      23/2  14/1  29/3      23/2  13/1  25/3
    2        X     12/1   7/1   8/1      12/1   7/1   8/1      12/1   7/1   8/1
             10X   22/2  16/2  17/2      21/2  16/2  17/2      22/2  16/2  17/2
             100X  25/2  21/2  20/2      25/2  20/2  21/2      25/2  20/2  21/2
    3        X     18/1  16/1  15/2      18/1  14/1  15/2      18/1  14/1  15/2
             10X   24/2  21/2  25/2      24/2  22/2  23/2      25/2  23/2  29/3
             100X  35/2  30/2  35/2      34/2  31/2  38/2      34/2  31/2  36/3
    4        X     FAIL   8/1   9/2      FAIL   8/1   9/2      FAIL   8/1   9/2
             10X   FAIL  13/2   9/2      FAIL  13/2   9/2      FAIL  13/2   9/2
             100X  FAIL   9/2   9/2      FAIL   9/2   9/2      FAIL   9/2   9/2
    5        X     FAIL  38/2  53/4      FAIL  38/2  61/5      FAIL  39/2  62/5
             10X   FAIL  42/2  72/6      FAIL  43/2  74/6      FAIL  44/2  78/6
             100X  FAIL  47/2  74/5      FAIL  48/2  78/6      FAIL  49/2  76/6
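Methods 7 and 8 above rely on Schubert's sparse variant of Broyden's update, in which each Jacobian row is corrected using only the components of the step lying in that row's sparsity pattern. A small Python sketch (our own notation and naming, applied to the Jacobian itself as in method 7):

```python
import numpy as np

def schubert_update(B, S_mask, s, y):
    """Schubert's sparse row-wise update of an approximate Jacobian B.

    S_mask[i, j] is True where B may have a structural nonzero.
    s is the step taken and y = f(x + s) - f(x).  Each row is corrected
    as in Broyden's update, but using only the components of s that
    fall in that row's sparsity pattern, so structural zeros stay zero.
    """
    B_new = B.copy()
    for i in range(B.shape[0]):
        s_i = np.where(S_mask[i], s, 0.0)   # project step onto row pattern
        denom = s_i @ s_i
        if denom > 0.0:                      # skip rows the step does not touch
            B_new[i] += (y[i] - B[i] @ s) / denom * s_i
    return B_new
```

Each updated row satisfies the secant condition B_new s = y exactly, while entries outside the known pattern remain zero; method 8 applies the same constrained correction to the U factor while holding L fixed, as suggested by Dennis & Marwil [33].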

3. In our experience the NR method usually requires roughly half as many iterations as the modified Powell's method. If one were to try to choose between the NR method and the modified Powell's methods (methods 7 and 8) on the basis of overall numerical efficiency on large sparse problems, the choice would depend on the relative expense of function evaluation, Jacobian evaluation, and LU factorization of the Jacobian. If both the evaluation and the factorization of the Jacobian are relatively cheap, then the NR method would seem preferable, assuming that no convergence difficulties arise. If there are such difficulties, it may be desirable to use the modified Powell's method, which may be implemented so that the Jacobian is simply re-evaluated at each iteration, rather than updated with some quasi-Newton formula. This is feasible if the Jacobian is easy to evaluate, and in general reduces the number of iterations required by the modified Powell's method. If the Jacobian is relatively hard to evaluate but cheap to factor, then the modified Powell's method using Schubert's update on the Jacobian (method 7) looks attractive. If the Jacobian is relatively hard to factor, then the modified Powell's method using Schubert's update on the U factor of the Jacobian while holding the L factor constant [33] may be preferred. While it is not evident in this set of test problems, the modified Powell's method in general exhibits much better global convergence properties than the NR method. This must of course also be considered in selecting a nonlinear equation solver.

The modified Powell's method, as extended here to sparse problems, has proved to be a reliable and efficient approach on the initial set of test problems considered here. Numerical studies are currently underway, using the equation-based flowsheeting system SEQUEL [2], to determine the performance of the modified Powell's method on realistically large flowsheeting problems.

Acknowledgements-This work has been supported by the National Science Foundation under Grant CPE 80-12428. This work was also presented at the AIChE Annual Meeting, Los Angeles, November 1982.

REFERENCES
1. M. A. Stadtherr & M. A. Malachowski, On efficient solution of complex systems of interlinked multistage separators. Comput. Chem. Engng 6, 121 (1982).
2. M. A. Stadtherr & C. M. Hilton, Development of a new equation-based process flowsheeting system: Numerical studies. In Selected Topics on Computer-Aided Process Design and Analysis (Edited by R. S. H. Mah & G. V. Reklaitis), AIChE Symposium Series Vol. 78(214), p. 12 (1982).
3. C. G. Broyden, A new method of solving simultaneous nonlinear equations. Comput. J. 12, 94 (1969).
4. W. E. Bosarge, Infinite dimensional iterative methods and applications. Publication 320-2347, IBM Scientific Center, Houston, Texas (1968).
5. P. T. Boggs, The solution of nonlinear systems of equations by A-stable integration techniques. SIAM J. Numer. Anal. 8, 767 (1971).
6. S. N. Chow, J. Mallet-Paret & J. A. Yorke, Finding zeros of maps: homotopy methods that are constructive with probability one. Math. Comput. 32, 887 (1978).
7. D. F. Davidenko, On a new method of numerical solution of systems of nonlinear equations. Doklady Akad. Nauk SSSR (N.S.) 88, 611 (1953).
8. J. Davis, The solution of nonlinear operator equations with critical points. Ph.D. Thesis, Oregon State University, Corvallis (1966).
9. M. J. D. Powell, A hybrid method for nonlinear equations. In Numerical Methods for Nonlinear Algebraic Equations (Edited by P. Rabinowitz). Gordon & Breach, New York (1970).
10. C. G. Broyden, A class of methods for solving nonlinear simultaneous equations. Math. Comput. 19, 577 (1965).
11. K. L. Hiebert, An evaluation of mathematical software that solves systems of nonlinear equations. Trans. Math. Software 8, 5 (1982).
12. H. S. Chen & M. A. Stadtherr, A modification of Powell's dogleg method for solving systems of nonlinear equations. Comput. Chem. Engng 5, 143 (1981).
13. J. D. Perkins & R. W. H. Sargent, SPEEDUP: A computer program for steady-state and dynamic simulation and design of chemical processes. In Selected Topics on Computer-Aided Process Design and Analysis (Edited by R. S. H. Mah & G. V. Reklaitis), AIChE Symposium Series Vol. 78(214), p. 1 (1982).
14. R. S. H. Mah & T. D. Lin, Comparison of modified Newton's methods. Comput. Chem. Engng 4, 75 (1980).
15. J. M. Bennett, Triangular factors of modified matrices. Numer. Math. 7, 217 (1965).
16. K. L. Hiebert, A comparison of software which solves systems of nonlinear equations. Sandia Tech. Rep. SAND 80-0181, Sandia National Laboratories, Albuquerque, New Mexico (1980).
17. H. S. Chen & M. A. Stadtherr, NEQLU-A Fortran subroutine for solving systems of nonlinear equations. Submitted for publication (1982).

18. F. T. Krogh, An integrator design. JPL Technical Memorandum 33-479, Pasadena (1971).
19. G. H. Connet, On the structure of zero finders. BIT 17, 170 (1977).
20. E. W. Gorczynski & H. P. Hutchison, Towards a quasilinear process simulator: I. Fundamental ideas. Comput. Chem. Engng 2, 189 (1978).
21. M. Shacham, S. Macchietto, L. F. Stutzman & P. Babcock, Equation oriented approach to process flowsheeting. Comput. Chem. Engng 6, 79 (1982).
22. A. R. Curtis, M. J. D. Powell & J. K. Reid, On the estimation of sparse Jacobian matrices. J. Inst. Math. Appl. 13, 117 (1974).
23. L. K. Schubert, Modification of a quasi-Newton method for nonlinear equations with a sparse Jacobian. Math. Comput. 25, 27 (1970).
24. M. A. Stadtherr & E. S. Wood, Sparse matrix methods for equation-based chemical process flowsheeting-I. Reordering phase. Comput. Chem. Engng 8, 9 (1984).
25. M. A. Stadtherr & E. S. Wood, Sparse matrix methods for equation-based chemical process flowsheeting-II. Numerical phase. Comput. Chem. Engng 8, 19 (1984).
26. I. S. Duff, A survey of sparse matrix research. Proc. IEEE 65, 500 (1977).
27. I. S. Duff & J. K. Reid, Some design features of a sparse matrix code. Trans. Math. Software 5, 18 (1979).
28. E. Hellerman & D. Rarick, The partitioned preassigned pivot procedure (P4). In Sparse Matrices and Their Applications (Edited by D. J. Rose & R. A. Willoughby). Plenum Press, New York (1972).
29. A. H. Sherman, Algorithm 533. NSPIV, A Fortran subroutine for sparse Gaussian elimination with partial pivoting. Trans. Math. Software 4, 391 (1978).
30. E. Marwil, Convergence results for Schubert's method for solving sparse nonlinear equations. SIAM J. Numer. Anal. 16, 488 (1979).
31. S. E. Gallun & C. D. Holland, A modification of Broyden's method for the solution of sparse systems-with application to distillation problems described by nonideal thermodynamic functions. Comput. Chem. Engng 4, 33 (1980).
32. C. G. Broyden, J. E. Dennis, Jr. & J. J. Moré, On the local and superlinear convergence of quasi-Newton methods. J. Inst. Math. Appl. 12, 223 (1973).
33. J. E. Dennis, Jr. & E. S. Marwil, Direct secant update of matrix factorizations. Math. Comput. 38, 459 (1982).
