10 Numerical Methods
10.1 Gaussian Elimination with Partial Pivoting
10.2 Iterative Methods for Solving Linear Systems
10.3 Power Method for Approximating Eigenvalues
10.4 Applications of Numerical Methods
Laboratory Experiment: Probabilities
Computers store real numbers in floating point form, ±M × 10^k, where k is an integer and the mantissa M satisfies the inequality 0.1 ≤ M < 1.
For example, the floating point forms of some real numbers are listed below.
Real Number    Floating Point Form
527            0.527 × 10^3
−3.81623       −0.381623 × 10^1
0.00045        0.45 × 10^−3
827,000        0.827 × 10^6
1              0.1 × 10^1
63.61          0.6361 × 10^2
−2200          −0.22 × 10^4
The number of decimal places that can be stored in the mantissa depends on the
computer. If n places are stored, then it is said that the computer stores n significant
digits. Numbers with additional digits are either truncated or rounded. When a number
is truncated to n significant digits, all digits after the first n significant digits are simply
omitted. For example, truncated to two significant digits, the number 0.1251 becomes
0.12.
If a number is rounded to n significant digits, then the last retained digit is
increased by 1 when the discarded portion is greater than half a digit, and the last
retained digit is not changed when the discarded portion is less than half a digit. For
the special case in which the discarded portion is precisely half a digit, round so that
the last retained digit is even. For example, the numbers below are rounded to two
significant digits.
Number Rounded Number
0.1249 0.12
0.125 0.12
0.1251 0.13
0.1349 0.13
0.135 0.14
0.1351 0.14
Most computers store numbers in binary form (base 2) rather than decimal form (base
10). Although rounding occurs in both systems, this discussion is restricted to the more
familiar base 10. When a computer truncates or rounds a number, it introduces a rounding
error that can affect subsequent calculations. The result after rounding or truncating is
called the stored value.
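As a concrete illustration, here is a minimal Python sketch of this storage rule. The function name stored_value is ours, not a standard library routine, and real computers apply the analogous rule in base 2 rather than base 10.

```python
from decimal import Decimal, ROUND_HALF_EVEN

def stored_value(x, n):
    """Round x to n significant digits, rounding ties to an even digit
    (a base-10 sketch of the storage rule described above)."""
    d = Decimal(str(x))
    if d == 0:
        return 0.0
    # d.adjusted() is the exponent of the leading significant digit,
    # so the last retained digit has place value 10**(adjusted - n + 1).
    quantum = Decimal(1).scaleb(d.adjusted() - n + 1)
    return float(d.quantize(quantum, rounding=ROUND_HALF_EVEN))

print(stored_value(0.1249, 2))   # 0.12
print(stored_value(0.125, 2))    # 0.12  (tie: last retained digit stays even)
print(stored_value(0.135, 2))    # 0.14  (tie: round so the digit is even)
print(stored_value(0.08335, 3))  # 0.0834
```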
Determine the stored value of each real number in a computer that rounds to three
significant digits.
a. 54.7 b. 0.1134 c. −8.2256 d. 0.08335 e. 0.08345
solution
Number       Floating Point Form    Stored Value
a. 54.7      0.547 × 10^2           0.547 × 10^2
b. 0.1134    0.1134 × 10^0          0.113 × 10^0
c. −8.2256   −0.82256 × 10^1        −0.823 × 10^1
d. 0.08335   0.8335 × 10^−1         0.834 × 10^−1
e. 0.08345   0.8345 × 10^−1         0.834 × 10^−1
Note in parts (d) and (e) that when the discarded portion of a decimal is precisely half
a digit, the number is rounded so that the stored value ends in an even digit.
Rounding error tends to propagate when using arithmetic operations. The next
example illustrates this phenomenon.
Evaluate the determinant of the matrix
A = [ 0.12  0.23 ]
    [ 0.12  0.12 ]
rounding each intermediate calculation to two significant digits. Then find the exact
value, rounded to two significant digits, and compare the two results.
solution
Rounding each intermediate calculation to two significant digits produces
|A| ≈ (0.12)(0.12) − (0.23)(0.12) ≈ 0.014 − 0.028 = −0.014.
The exact value is |A| = 0.0144 − 0.0276 = −0.0132. So, to two significant digits,
the determinant is −0.013. Note that rounding at the intermediate steps produced a
determinant that is not correct to two significant digits, even though each arithmetic
operation was performed with two significant digits of accuracy. This is what is meant
when it is said that arithmetic operations tend to propagate rounding error.
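The determinant computation above can be replayed with the stored_value sketch from earlier in this section (again, an illustrative base-10 model, not how hardware actually rounds):

```python
# Entries of A from the example above.
a11, a12 = 0.12, 0.23
a21, a22 = 0.12, 0.12

# Round each intermediate product to two significant digits ...
det_rounded = stored_value(a11 * a22, 2) - stored_value(a12 * a21, 2)
print(det_rounded)                 # approximately -0.014

# ... versus rounding only the exact final result.
det_exact = a11 * a22 - a12 * a21  # approximately -0.0132
print(stored_value(det_exact, 2))  # -0.013
```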
Use Gaussian elimination to solve the system of linear equations whose augmented matrix is
[  0.143   0.357   2.01   −5.17 ]
[ −1.31    0.911   1.99   −5.46 ]
[ 11.2    −4.30   −0.605   4.42 ]
rounding each intermediate calculation to three significant digits.
Applying Gaussian elimination produces

[ 1.00    2.50   14.1   −36.2 ]    Dividing the first row by 0.143
[ −1.31   0.911  1.99   −5.46 ]    produces a new first row.
[ 11.2   −4.30  −0.605   4.42 ]

[ 1.00    2.50   14.1   −36.2 ]    Adding 1.31 times the first row to the
[ 0.00    4.19   20.5   −52.9 ]    second row produces a new second row.
[ 11.2   −4.30  −0.605   4.42 ]

[ 1.00    2.50   14.1   −36.2 ]    Adding −11.2 times the first row to the
[ 0.00    4.19   20.5   −52.9 ]    third row produces a new third row.
[ 0.00  −32.3  −159     409   ]

[ 1.00    2.50   14.1   −36.2 ]    Dividing the second row by 4.19
[ 0.00    1.00   4.89   −12.6 ]    produces a new second row.
[ 0.00  −32.3  −159     409   ]

[ 1.00    2.50   14.1   −36.2 ]    Adding 32.3 times the second row to the
[ 0.00    1.00   4.89   −12.6 ]    third row produces a new third row.
[ 0.00    0.00  −1.00    2.00 ]

[ 1.00    2.50   14.1   −36.2 ]    Multiplying the third row by −1
[ 0.00    1.00   4.89   −12.6 ]    produces a new third row.
[ 0.00    0.00   1.00   −2.00 ]
Using back-substitution, you obtain
x3 = −2.00
x2 = −2.82
x1 = −0.900.
Check this “solution” in the original system of equations to see that it is not correct.
Verify that the correct solution is
x1 = 1, x2 = 2, x3 = −3.
In Example 3, the first column of the original augmented matrix
[  0.143   0.357   2.01   −5.17 ]
[ −1.31    0.911   1.99   −5.46 ]
[ 11.2    −4.30   −0.605   4.42 ]
has entries whose absolute values increase roughly by powers of 10 as you move down
the column.
the column. In subsequent elementary row operations, you multiplied the first row by
1.31 and −11.2, and you multiplied the second row by 32.3. When using floating point
arithmetic, such large row multipliers tend to propagate rounding error. For example,
notice what happened during Gaussian elimination when you multiplied the first row
by 1.31 and added to the second row:
[ 1.00    2.50   14.1   −36.2 ]
[ −1.31   0.911  1.99   −5.46 ]
[ 11.2   −4.30  −0.605   4.42 ]

[ 1.00    2.50   14.1   −36.2 ]    Adding 1.31 times the first row to the
[ 0.00    4.19   20.5   −52.9 ]    second row produces a new second row.
[ 11.2   −4.30  −0.605   4.42 ]
The second, third, and fourth entries in the second row each lost one decimal place of
accuracy. Also, notice what happened when you multiplied the first row by −11.2 and
added to the third row:
[ 1.00    2.50   14.1   −36.2 ]
[ 0.00    4.19   20.5   −52.9 ]
[ 11.2   −4.30  −0.605   4.42 ]

[ 1.00    2.50   14.1   −36.2 ]    Adding −11.2 times the first row to the
[ 0.00    4.19   20.5   −52.9 ]    third row produces a new third row.
[ 0.00  −32.3  −159     409   ]
The second entry in the third row lost one decimal place of accuracy, the third entry in
the third row lost three decimal places of accuracy, and the fourth entry in the third row
lost two decimal places of accuracy.
This type of error propagation can be lessened by appropriate row interchanges
that produce smaller multipliers. One way to restrict the size of the multipliers is to use
Gaussian elimination with partial pivoting.
Example 4 on the next page shows what happens when this partial pivoting
technique is used on the system of linear equations from Example 3.
Use Gaussian elimination with partial pivoting to solve the system from Example 3.
After each intermediate calculation, round the result to three significant digits.
solution
As in Example 3, the augmented matrix for this system is
[  0.143   0.357   2.01   −5.17 ]
[ −1.31    0.911   1.99   −5.46 ]
[ 11.2    −4.30   −0.605   4.42 ]
In the first column, 11.2 is the pivot because it has the largest absolute value. So,
interchange the first and third rows and apply elementary row operations.
[ 11.2    −4.30   −0.605   4.42 ]    Interchange the first
[ −1.31    0.911   1.99   −5.46 ]    and third rows.
[  0.143   0.357   2.01   −5.17 ]
[ 1.00   −0.384  −0.0540   0.395 ]    Dividing the first row by 11.2
[ −1.31   0.911   1.99    −5.46  ]    produces a new first row.
[ 0.143   0.357   2.01    −5.17  ]

[ 1.00   −0.384  −0.0540   0.395 ]    Adding 1.31 times the first row to the
[ 0.00    0.408   1.92    −4.94  ]    second row produces a new second row.
[ 0.143   0.357   2.01    −5.17  ]

[ 1.00   −0.384  −0.0540   0.395 ]    Adding −0.143 times the first row to the
[ 0.00    0.408   1.92    −4.94  ]    third row produces a new third row.
[ 0.00    0.412   2.02    −5.23  ]
This completes the first pass. For the second pass, consider the submatrix formed by
deleting the first row and first column. In this matrix the pivot is 0.412, so interchange
the second and third rows and proceed with Gaussian elimination.
[ 1.00   −0.384  −0.0540   0.395 ]    Interchange the second
[ 0.00    0.412   2.02    −5.23  ]    and third rows.
[ 0.00    0.408   1.92    −4.94  ]

REMARK: Note that the row multipliers used in Example 4 are 1.31, −0.143, and −0.408, whereas the multipliers used in Example 3 are 1.31, −11.2, and 32.3.

[ 1.00   −0.384  −0.0540   0.395 ]    Dividing the second row by 0.412
[ 0.00    1.00    4.90    −12.7  ]    produces a new second row.
[ 0.00    0.408   1.92    −4.94  ]

[ 1.00   −0.384  −0.0540   0.395 ]    Adding −0.408 times the second row to the
[ 0.00    1.00    4.90    −12.7  ]    third row produces a new third row.
[ 0.00    0.00   −0.0800   0.240 ]

This completes the second pass, and the entire procedure can be completed by dividing
the third row by −0.0800.

[ 1.00   −0.384  −0.0540   0.395 ]
[ 0.00    1.00    4.90    −12.7  ]
[ 0.00    0.00    1.00    −3.00  ]
So, x3 = −3.00, and back-substitution produces x2 = 2.00 and x1 = 1.00, which
agrees with the exact solution of x1 = 1, x2 = 2, and x3 = −3.
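The hand computation of Example 4 can be imitated in a short Python sketch (using NumPy, and a helper sig that we introduce here for significant-digit rounding). It rounds each computed row and back-substituted value to three significant digits; rounding every individual arithmetic operation, as the text does, can differ slightly.

```python
import numpy as np
from math import floor, log10

def sig(x, n=3):
    """Round x to n significant digits (0 stays 0)."""
    return 0.0 if x == 0 else round(x, n - 1 - floor(log10(abs(x))))

def solve_partial_pivot(A, b, digits=3):
    """Gaussian elimination with partial pivoting, rounding intermediate
    results to `digits` significant digits (a sketch, not library code)."""
    M = np.hstack([np.array(A, float), np.array(b, float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # pivot row: largest |entry|
        M[[k, p]] = M[[p, k]]                 # interchange rows
        M[k] = [sig(v, digits) for v in M[k] / M[k, k]]
        for i in range(k + 1, n):
            M[i] = [sig(v, digits) for v in M[i] - M[i, k] * M[k]]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back-substitution
        x[i] = sig(M[i, n] - M[i, i + 1:n] @ x[i + 1:], digits)
    return x

A = [[0.143, 0.357, 2.01],
     [-1.31, 0.911, 1.99],
     [11.2, -4.30, -0.605]]
b = [-5.17, -5.46, 4.42]
print(solve_partial_pivot(A, b))   # close to the exact solution (1, 2, -3)
```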
The term partial in partial pivoting refers to the fact that in each pivot search, you
consider only entries in the first column of the coefficient matrix or submatrix. This
search can be extended to include every entry in the coefficient matrix or submatrix;
the resulting technique is called Gaussian elimination with complete pivoting.
Unfortunately, neither complete pivoting nor partial pivoting solves all problems of
rounding error. Some systems of linear equations, called ill-conditioned systems, are
extremely sensitive to numerical errors. For such systems, pivoting is not much help.
A common type of system of linear equations that tends to be ill-conditioned is one
for which the determinant of the coefficient matrix is nearly zero. The next example
illustrates this problem.
Use Gaussian elimination to solve the system
x + y = 0
x + 1.0025y = 20.
The exact solution is x = −8000, y = 8000. Now suppose the computer stores the
coefficient 1.0025 as 1.002. The augmented matrix then is
[ 1  1      0  ]
[ 1  1.002  20 ].
Adding −1 times the first row to the second row produces
[ 1  1      0  ]
[ 0  0.002  20 ]
and dividing the second row by 0.002 produces
[ 1  1     0      ]
[ 0  1.00  10,000 ].
So, y = 10,000 and back-substitution produces
x = −y
= −10,000.
This “solution” represents a percentage error of 25% for both the x-value and the
y-value. Note that this error was caused by a rounding error of only 0.0005, when
1.0025 was rounded to 1.002.
10.1 Exercises

Floating Point Form In Exercises 1–8, express the real number in floating point form.
1. 1824   2. 321.61   3. −2.62   4. −21.001
5. −0.00121   6. 0.00026   7. 1/8   8. 61 1/2

Finding Stored Values In Exercises 9–16, determine the stored value of the real number in a computer that rounds to (a) three significant digits and (b) four significant digits.
9. 4413   10. 21.4   11. −92.646   12. 216.964
13. 7/16   14. 7/32   15. 1/7   16. 1/6

Propagation of Rounding Error In Exercises 17 and 18, evaluate the determinant of the matrix, rounding each intermediate calculation to three significant digits. Then find the exact value and compare the two results.
17. [ 66.00  56.00 ]    18. [ 2.12  4.22 ]
    [  1.24   1.02 ]        [ 1.07  2.12 ]

Solving an Ill-Conditioned System In Exercises 25 and 26, use Gaussian elimination to solve the ill-conditioned system of linear equations, rounding each intermediate calculation to three significant digits. Then compare this solution with the exact solution provided.
25. x + y = 3
    x + (31/30)y = −50
    (Exact: x = 1593, y = −1590)
26. x − (800/801)y = 10
    −x + y = 50
    (Exact: x = 48,010, y = 48,060)

27. Comparing Ill-Conditioned Systems Solve each ill-conditioned system and compare the solutions.
(a) x + y = 2            (b) x + y = 2
    x + 1.0001y = 2          x + 1.0001y = 2.0001
To apply the Jacobi method, first rewrite each equation to solve for one of the variables, make an initial approximation of the solution, and substitute these values of xi on the right-hand sides of the rewritten equations
to obtain the first approximation. After obtaining the first approximation, you have
performed one iteration. In the same way, form the second approximation by substituting
the first approximation’s x-values on the right-hand sides of the rewritten equations. By
repeated iterations, you form a sequence of approximations that often converges to the
actual solution. Example 1 illustrates the use of the Jacobi method.
Use the Jacobi method to approximate the solution of the system of linear equations below.
5x1 − 2x2 + 3x3 = −1
−3x1 + 9x2 + x3 = 2
2x1 − x2 − 7x3 = 3
Continue the iterations until two successive approximations are identical when rounded
to three significant digits.
solution
To begin, rewrite the system in the form
x1 = −1/5 + (2/5)x2 − (3/5)x3
x2 = 2/9 + (3/9)x1 − (1/9)x3
x3 = −3/7 + (2/7)x1 − (1/7)x2.
You do not yet know the solution, so choose
x1 = 0, x2 = 0, x3 = 0 Initial approximation
n 0 1 2 3 4 5 6 7
x1 0.000 −0.200 0.146 0.191 0.181 0.185 0.186 0.186
x2 0.000 0.222 0.203 0.328 0.332 0.329 0.331 0.331
x3 0.000 −0.429 −0.517 −0.416 −0.421 −0.424 −0.423 −0.423
The last two columns in the table are identical, so you can conclude that to three
significant digits the solution is
x1 = 0.186, x2 = 0.331, x3 = −0.423.
For the system of linear equations in Example 1, the Jacobi method converges.
That is, repeated iterations result in approximations that are identical to a specified
number of significant digits. As is generally true for iterative methods, greater accuracy
would require more iterations.
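A compact Python sketch of the Jacobi iteration follows. The function name and the stopping rule (stop when successive approximations agree to about three significant digits) are our choices, not part of the text's statement of the method.

```python
import numpy as np

def jacobi(A, b, tol=5e-4, max_iter=100):
    """Jacobi method: every component of the new approximation is
    computed from the previous approximation only."""
    A, b = np.array(A, float), np.array(b, float)
    D = np.diag(A)                 # diagonal entries a_ii
    R = A - np.diagflat(D)         # off-diagonal part
    x = np.zeros_like(b)           # initial approximation (0, ..., 0)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

A = [[5, -2, 3], [-3, 9, 1], [2, -1, -7]]
b = [-1, 2, 3]
print(jacobi(A, b).round(3))   # approximately (0.186, 0.331, -0.423)
```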
Use the Gauss-Seidel iteration method to approximate the solution of the system of
equations in Example 1.
solution
As in Example 1, use the system of equations rewritten in the form
x1 = −1/5 + (2/5)x2 − (3/5)x3
x2 = 2/9 + (3/9)x1 − (1/9)x3
x3 = −3/7 + (2/7)x1 − (1/7)x2.
The first computation is identical to that in Example 1. That is, using (x1, x2, x3) = (0, 0, 0)
as the initial approximation, you obtain the new value of x1.
x1 = −1/5 + (2/5)(0) − (3/5)(0) = −0.200
Now that you have a new value of x1, use it to compute a new value of x2. That is,
x2 = 2/9 + (3/9)(−0.200) − (1/9)(0) ≈ 0.156.
Similarly, use x1 = −0.200 and x2 = 0.156 to compute a new value of x3. That is,
x3 = −3/7 + (2/7)(−0.200) − (1/7)(0.156) ≈ −0.508.
So, the first approximation is x1 = −0.200, x2 = 0.156, and x3 = −0.508. Now,
performing the next iteration produces
x1 = −1/5 + (2/5)(0.156) − (3/5)(−0.508) ≈ 0.167
x2 = 2/9 + (3/9)(0.167) − (1/9)(−0.508) ≈ 0.334
x3 = −3/7 + (2/7)(0.167) − (1/7)(0.334) ≈ −0.429.
Continued iterations produce the sequence of approximations shown in the table.
n 0 1 2 3 4 5 6
x1 0.000 −0.200 0.167 0.191 0.187 0.186 0.186
x2 0.000 0.156 0.334 0.334 0.331 0.331 0.331
x3 0.000 −0.508 −0.429 −0.422 −0.422 −0.423 −0.423
Note that after only six iterations of the Gauss-Seidel method, you achieved the
same accuracy as was obtained with seven iterations of the Jacobi method in Example 1.
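The same system can be run through a Gauss-Seidel sketch; the only change from the Jacobi sketch above (whose import and test system this reuses) is that each new component is used immediately.

```python
def gauss_seidel(A, b, tol=5e-4, max_iter=100):
    """Gauss-Seidel method: new values are used as soon as they are
    available within each sweep."""
    A, b = np.array(A, float), np.array(b, float)
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

print(gauss_seidel(A, b).round(3))   # same solution, in fewer sweeps
```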
An Example of Divergence

Apply the Jacobi method and the Gauss-Seidel method to the system
x1 − 5x2 = −4
7x1 − x2 = 6
using the initial approximation (x1, x2) = (0, 0). (The exact solution is x1 = 1, x2 = 1.) Rewriting the system as x1 = −4 + 5x2 and x2 = −6 + 7x1, the Jacobi method produces the diverging sequence

n    0    1    2     3      4      5      6        7
x1   0   −4  −34  −174  −1224  −6124  −42,874  −214,374
x2   0   −6  −34  −244  −1224  −8574  −42,874  −300,124

and the Gauss-Seidel method diverges even more rapidly.

n    0    1      2       3         4           5
x1   0   −4   −174   −6124   −214,374   −7,503,124
x2   0  −34  −1224  −42,874  −1,500,624  −52,521,874
You will now look at a special type of coefficient matrix A, called a strictly
diagonally dominant matrix, for which it is guaranteed that both the Jacobi method
and the Gauss-Seidel method will converge. An n × n matrix A is strictly diagonally
dominant when the absolute value of each diagonal entry exceeds the sum of the
absolute values of the remaining entries in the same row; that is, ∣aii∣ > Σj≠i ∣aij∣
for each i = 1, 2, . . . , n.
Which of the systems of linear equations shown below has a strictly diagonally
dominant coefficient matrix?
a. 3x1 − x2 = −4      b. 4x1 + 2x2 − x3 = −1
   2x1 + 5x2 = 2         x1 + 2x3 = −4
                         3x1 − 5x2 + x3 = 3
solution
a. The coefficient matrix
A = [ 3  −1 ]
    [ 2   5 ]
is strictly diagonally dominant because ∣3∣ > ∣−1∣ and ∣5∣ > ∣2∣.
b. The coefficient matrix
A = [ 4   2  −1 ]
    [ 1   0   2 ]
    [ 3  −5   1 ]
is not strictly diagonally dominant because the entries in the second and third rows
do not conform to the definition. For example, in the second row, a21 = 1, a22 = 0,
and a23 = 2, and it is not true that ∣a22∣ > ∣a21∣ + ∣a23∣. Interchanging the second
and third rows in the original system of linear equations, however, produces the
coefficient matrix
A′ = [ 4   2  −1 ]
     [ 3  −5   1 ]
     [ 1   0   2 ]
which is strictly diagonally dominant.
The next theorem, stated without proof, tells you that strict diagonal dominance of
the coefficient matrix assures the convergence of both the Jacobi method and the
Gauss-Seidel method.

THEOREM 10.1  Convergence of the Jacobi and Gauss-Seidel Methods
If A is strictly diagonally dominant, then the system of linear equations Ax = b
has a unique solution to which the Jacobi method and the Gauss-Seidel method
converge for any initial approximation.
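The hypothesis of Theorem 10.1 is easy to test mechanically; below is a small Python sketch (the function name is ours) that checks strict diagonal dominance with NumPy.

```python
import numpy as np

def strictly_diagonally_dominant(A):
    """True when |a_ii| exceeds the sum of the other |a_ij| in every row."""
    B = np.abs(np.array(A, float))
    diag = B.diagonal()
    return bool(np.all(diag > B.sum(axis=1) - diag))

print(strictly_diagonally_dominant([[3, -1], [2, 5]]))                    # True
print(strictly_diagonally_dominant([[4, 2, -1], [1, 0, 2], [3, -5, 1]]))  # False
```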
In Example 3, you looked at a system of linear equations for which the Jacobi and
Gauss-Seidel methods diverge. The next example shows that by interchanging the rows
of the system in Example 3, you obtain a coefficient matrix that is strictly diagonally
dominant. After this interchange, convergence is assured by Theorem 10.1.
Interchange the rows of the system in Example 3 to obtain a system with a strictly
diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to
approximate the solution to four significant digits.
solution
Begin by interchanging the two rows of the system to obtain
7x1 − x2 = 6
x1 − 5x2 = −4.
Note that the coefficient matrix of this system is strictly diagonally dominant. Then
solve for x1 and x2, as shown below.
x1 = 6/7 + (1/7)x2
x2 = 4/5 + (1/5)x1
Using the initial approximation (x1, x2) = (0, 0), obtain the sequence of approximations
shown in the table.
REMARK: Do not conclude from Theorem 10.1 that strict diagonal dominance is a necessary condition for convergence of the Jacobi or Gauss-Seidel methods (see Exercises 25 and 26).

n    0       1       2       3       4      5
x1   0.0000  0.8571  0.9959  0.9999  1.000  1.000
x2   0.0000  0.9714  0.9992  1.000   1.000  1.000
So, the solution is x1 = 1 and x2 = 1.
c2 = 0.25(60 + c3 + 40 + 40)
c3 = 0.25(c1 + 70 + 50 + c2)
10.2 Exercises

The Jacobi Method In Exercises 1–6, apply the Jacobi method to the system of linear equations, using the initial approximation (x1, x2, . . . , xn) = (0, 0, . . . , 0). Continue performing iterations until two successive approximations are identical when rounded to three significant digits.
1. 3x1 − x2 = 2        2. −4x1 + 2x2 = −6
   x1 + 4x2 = 5           3x1 − 5x2 = 1
3. −5x1 + x2 = −14     4. 7x1 + 4x2 = −4
   x1 − 12x2 = 8          4x1 + 7x2 = 7
5. 2x1 − x2 = 2
   x1 − 3x2 + x3 = −2
   −x1 + x2 − 3x3 = −6
6. 4x1 + x2 + x3 = 7
   x1 − 7x2 + 2x3 = −2
   3x1 + 4x3 = 11

The Gauss-Seidel Method In Exercises 7–12, apply the Gauss-Seidel method to the system of linear equations in the stated exercise.
7. Exercise 1   8. Exercise 2
9. Exercise 3   10. Exercise 4
11. Exercise 5   12. Exercise 6

Showing Divergence In Exercises 13–16, show that the Jacobi and Gauss-Seidel methods diverge for the system using the initial approximation (x1, x2, . . . , xn) = (0, 0, . . . , 0).
13. x1 − 2x2 = −1      14. −x1 + 4x2 = 1
    2x1 + x2 = 3           3x1 − 2x2 = 2
15. 2x1 − 3x2 = −7
    x1 + 3x2 − 10x3 = 9
    3x1 + x3 = 13
16. x1 + 3x2 − x3 = 5
    3x1 − x2 = 5
    x2 + 2x3 = 1

Strictly Diagonally Dominant Matrices In Exercises 17–20, determine whether the matrix is strictly diagonally dominant.
17. [ 2  −1 ]    18. [ 1  −2 ]
    [ 3   5 ]        [ 0   1 ]
19. [ 12   6   0 ]    20. [ 7   5  −1 ]
    [  2  −3   2 ]        [ 1  −4   1 ]
    [  0   6  13 ]        [ 0   2  −3 ]

Interchanging Rows to Attain Convergence In Exercises 21–24, interchange the rows of the system of linear equations in the stated exercise to obtain a system with a strictly diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the solution to two significant digits.
21. Exercise 13   22. Exercise 14
23. Exercise 15   24. Exercise 16

Showing Convergence In Exercises 25 and 26, the coefficient matrix of the system of linear equations is not strictly diagonally dominant. Show that the Jacobi and Gauss-Seidel methods converge using an initial approximation of (x1, x2, . . . , xn) = (0, 0, . . . , 0).
25. −4x1 + 5x2 = 1    26. 4x1 + 2x2 − 2x3 = 0
    x1 + 2x2 = 3          x1 − 3x2 − x3 = 7
                          3x1 − x2 + 4x3 = 5

True or False? In Exercises 27–29, determine whether each statement is true or false. If a statement is true, give a reason or cite an appropriate statement from the text. If a statement is false, provide an example that shows the statement is not true in all cases or cite an appropriate statement from the text.
27. The Jacobi method converges when it produces repeated iterations that are identical to a specified number of significant digits.
28. If the Jacobi method or the Gauss-Seidel method diverges, then the system has no solution.
29. If a matrix A is strictly diagonally dominant, then the system of linear equations Ax = b has no unique solution.

30. CAPSTONE Consider the system
ax1 − 5x2 + 2x3 = 19
3x1 + bx2 − x3 = −1
−2x1 + x2 + cx3 = 9.
(a) Describe all the values of a, b, and c that will allow you to use the Jacobi method to approximate the solution of this system.
(b) Describe all the values of a, b, and c that will guarantee that the Jacobi and Gauss-Seidel methods converge.

31. In Exercise 30, let a = 8, b = 1, and c = 4. What can you determine about the convergence or divergence of the Jacobi and Gauss-Seidel methods? Explain.
Not every matrix has a dominant eigenvalue. For example, the matrix
A = [ 1   0 ]
    [ 0  −1 ]
has eigenvalues of λ1 = 1 and λ2 = −1, and the matrix
B = [ 2  0  0 ]
    [ 0  2  0 ]
    [ 0  0  1 ]
has eigenvalues of λ1 = 2, λ2 = 2, and λ3 = 1. Neither matrix has a dominant
eigenvalue, because in each case no single eigenvalue is strictly largest in
absolute value.

Find the dominant eigenvalue and corresponding dominant eigenvectors of the matrix
A = [ 2  −12 ]
    [ 1   −5 ].
solution
From Example 4 in Section 7.1, the characteristic polynomial of A is
λ2 + 3λ + 2 = (λ + 1)(λ + 2). So, the eigenvalues of A are λ1 = −1 and λ2 = −2,
of which the dominant one is λ2 = −2. From the same example, the dominant
eigenvectors of A (those corresponding to λ2 = −2) are of the form x = t(3, 1), t ≠ 0.
Complete six iterations of the power method to approximate a dominant eigenvector of the matrix
A = [ 2  −12 ]
    [ 1   −5 ].
solution
Begin with an initial nonzero approximation of
x0 = [1  1]^T.
Then obtain the approximations shown below.

Iteration                      “Scaled” Approximation
x1 = Ax0 = [−10  −4]^T         −4[2.50  1.00]^T
x2 = Ax1 = [28  10]^T          10[2.80  1.00]^T
x3 = Ax2 = [−64  −22]^T        −22[2.91  1.00]^T
x4 = Ax3 = [136  46]^T         46[2.96  1.00]^T
x5 = Ax4 = [−280  −94]^T       −94[2.98  1.00]^T
x6 = Ax5 = [568  190]^T        190[2.99  1.00]^T

Note that the approximations appear to be approaching a scalar multiple of [3  1]^T,
which is a dominant eigenvector of the matrix A from Example 1. In this case, the
dominant eigenvalue of the matrix A is known to be λ = −2. When the dominant
eigenvalue of A is unknown, however, you can use the next theorem, which gives a
formula for determining the eigenvalue corresponding to an eigenvector. This theorem
is credited to the English physicist John William Strutt, Lord Rayleigh (1842–1919).
THEOREM 10.2  Determining an Eigenvalue from an Eigenvector
If x is an eigenvector of a matrix A, then its corresponding eigenvalue is
λ = (Ax ∙ x)/(x ∙ x).
This quotient is called the Rayleigh quotient.

proof
x is an eigenvector of A, so it follows that Ax = λx and
(Ax ∙ x)/(x ∙ x) = (λx ∙ x)/(x ∙ x) = λ(x ∙ x)/(x ∙ x) = λ.
In cases for which the power method generates a good approximation of a dominant
eigenvector, the Rayleigh quotient provides a correspondingly good approximation of the
dominant eigenvalue. Example 3 demonstrates the use of the Rayleigh quotient.
Use the result of Example 2 to approximate the dominant eigenvalue of the matrix
A = [ 2  −12 ]
    [ 1   −5 ].
solution
In Example 2, the sixth iteration of the power method produced
x6 = [568  190]^T ≈ 190[2.99  1.00]^T.
With x = (2.99, 1.00) as the approximation of a dominant eigenvector of A, the
Rayleigh quotient gives
λ = (Ax ∙ x)/(x ∙ x) ≈ −20.0/9.94 ≈ −2.01
which is a good approximation of the dominant eigenvalue λ = −2.
Complete six iterations of the power method with scaling to approximate a dominant
eigenvector, and corresponding dominant eigenvalue, of the matrix
A = [  1  2  0 ]
    [ −2  1  2 ].
    [  1  3  1 ]
Use
x0 = [1  1  1]^T
as the initial approximation of the dominant eigenvector.
solution
One iteration of the power method produces
Ax0 = [  1  2  0 ][ 1 ]   [ 3 ]
      [ −2  1  2 ][ 1 ] = [ 1 ]
      [  1  3  1 ][ 1 ]   [ 5 ]
and by scaling you obtain the approximation
x1 = (1/5)[ 3 ]   [ 0.60 ]
          [ 1 ] = [ 0.20 ].
          [ 5 ]   [ 1.00 ]
A second iteration yields
Ax1 = [  1  2  0 ][ 0.60 ]   [ 1.00 ]
      [ −2  1  2 ][ 0.20 ] = [ 1.00 ]
      [  1  3  1 ][ 1.00 ]   [ 2.20 ]
and
x2 = (1/2.20)[ 1.00 ]   [ 0.45 ]
             [ 1.00 ] ≈ [ 0.45 ].
             [ 2.20 ]   [ 1.00 ]

REMARK: Note that the scaling factors used to obtain the vectors in the table,
x1: 5.00,  x2: 2.20,  x3: 2.80,  x4: 3.13,  x5: 3.03,  x6: 3.00,
approach the dominant eigenvalue λ = 3.

Continuing this process produces the sequence of approximations shown in the table.
  x0      x1      x2      x3      x4      x5      x6
[1.00]  [0.60]  [0.45]  [0.48]  [0.50]  [0.50]  [0.50]
[1.00]  [0.20]  [0.45]  [0.55]  [0.51]  [0.50]  [0.50]
[1.00]  [1.00]  [1.00]  [1.00]  [1.00]  [1.00]  [1.00]
From the table, approximate a dominant eigenvector of A to be
x = [ 0.50 ]
    [ 0.50 ].
    [ 1.00 ]
Then use the Rayleigh quotient to approximate the dominant eigenvalue of A to be
λ = 3. (For this example, check that the approximations of x and λ are exact.)
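The computation above is easy to script. Below is a Python sketch of the power method with scaling, finished with the Rayleigh quotient of Theorem 10.2. Scaling here divides by the entry of largest absolute value, which can flip signs when the dominant eigenvalue is negative; the Rayleigh quotient is unaffected.

```python
import numpy as np

def power_method(A, x0, iterations=6):
    """Power method with scaling, plus a Rayleigh-quotient estimate
    of the dominant eigenvalue (a sketch)."""
    A, x = np.array(A, float), np.array(x0, float)
    for _ in range(iterations):
        x = A @ x
        x = x / np.abs(x).max()      # scaling step
    lam = (A @ x) @ x / (x @ x)      # Rayleigh quotient
    return x, lam

A = [[1, 2, 0], [-2, 1, 2], [1, 3, 1]]
x, lam = power_method(A, [1, 1, 1])
print(x.round(2), round(lam, 2))     # approximately [0.5 0.5 1.0] and 3.0
```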
THEOREM 10.3  Convergence of the Power Method
If A is an n × n diagonalizable matrix with a dominant eigenvalue, then there
exists a nonzero vector x0 such that the sequence of vectors
Ax0, A²x0, A³x0, . . . , A^k x0, . . .
approaches a multiple of the dominant eigenvector of A.

proof
A is diagonalizable, so from Theorem 7.5 it has n linearly independent eigenvectors
x1, x2, . . . , xn with corresponding eigenvalues of λ1, λ2, . . . , λn. Assume that these
eigenvalues are ordered so that λ1 is the dominant eigenvalue (with a corresponding
eigenvector of x1). The n eigenvectors x1, x2, . . . , xn are linearly independent, so they
must form a basis for Rn. For the initial approximation x0, choose a nonzero vector such
that the linear combination x0 = c1x1 + c2x2 + . . . + cnxn has a nonzero leading
coefficient c1. (When c1 = 0, the power method may not converge to the dominant
eigenvector, and a different x0 must be used as the initial approximation. See Exercises
19 and 20.) Now, multiplying both sides of this equation by A produces
Ax0 = A(c1x1 + c2x2 + . . . + cnxn)
= c1(Ax1) + c2(Ax2) + . . . + cn(Axn )
= c1(λ1x1) + c2(λ2x2) + . . . + cn(λnxn).
Repeated multiplication of both sides of this equation by A produces
A^k x0 = c1(λ1^k x1) + c2(λ2^k x2) + . . . + cn(λn^k xn)
which implies that
A^k x0 = λ1^k [c1x1 + c2(λ2/λ1)^k x2 + . . . + cn(λn/λ1)^k xn].
Now, from the original assumption that λ1 is larger in absolute value than the other
eigenvalues, it follows that each of the fractions
λ2/λ1, λ3/λ1, . . . , λn/λ1
is less than 1 in absolute value. So, as k approaches infinity, each of the factors
(λ2/λ1)^k, (λ3/λ1)^k, . . . , (λn/λ1)^k
must approach 0.
must approach 0. This implies that the approximation Akx0 ≈ λk1c1x1, c1 ≠ 0, improves
as k increases. x1 is a dominant eigenvector, so it follows that any scalar multiple of x1
is also a dominant eigenvector, which shows that Akx0 approaches a multiple of the
dominant eigenvector of A.
The proof of Theorem 10.3 provides some insight into the rate of convergence of
the power method. That is, if the eigenvalues of A are ordered so that
∣λ1∣ > ∣λ2∣ ≥ ∣λ3∣ ≥ . . . ≥ ∣λn∣
then the power method converges quickly when ∣λ2∣/∣λ1∣ is small and slowly when
∣λ2∣/∣λ1∣ is close to 1. For example, the matrix
A = [ 4  5 ]
    [ 6  5 ]
has eigenvalues of λ1 = 10 and λ2 = −1. (Check this.) So the ratio ∣λ2∣/∣λ1∣ is 0.1.
For this matrix, it only takes four iterations to obtain successive approximations that
agree when rounded to three significant digits.

   x0       x1       x2       x3       x4
[1.000]  [0.818]  [0.835]  [0.833]  [0.833]
[1.000]  [1.000]  [1.000]  [1.000]  [1.000]
This section used the power method to approximate the dominant eigenvalue of
a matrix. This method can be modified to approximate other eigenvalues through use
of a procedure called deflation. Moreover, the power method is only one of several
techniques that can be used to approximate the eigenvalues of a matrix. Another
popular method is called the QR algorithm. This is the method used in most programs
and calculators for finding eigenvalues and eigenvectors. The QR algorithm uses the
QR-factorization of the matrix, as presented in Chapter 5. Discussions of the deflation
method and the QR algorithm are in most texts on numerical methods.
10.3 Exercises

Finding a Dominant Eigenvector In Exercises 1–4, use the techniques presented in Chapter 7 to find the eigenvalues of the matrix A. When A has a dominant eigenvalue, find a corresponding dominant eigenvector.
1. A = [  1  −4 ]    2. A = [ −1  −2 ]
       [ −2   3 ]           [ −6   0 ]
3. A = [ 1  3  0 ]    4. A = [ −5   0  0 ]
       [ 2  1  2 ]           [  3   5  0 ]
       [ 1  1  0 ]           [  4  −2  3 ]

Using the Rayleigh Quotient In Exercises 5 and 6, use the Rayleigh quotient to find the eigenvalue λ of the matrix A corresponding to the eigenvector x.
5. A = [ 4  −5 ],  x = [ 5 ]
       [ 2  −3 ]       [ 2 ]
6. A = [ 5   0 ],  x = [ 4 ]
       [ 6  −3 ]       [ 3 ]

Eigenvectors and Eigenvalues In Exercises 7–10, use the power method with scaling to approximate a dominant eigenvector of the matrix A. Start with x0 = [1 1]^T and complete five iterations. Then use x5 to approximate the dominant eigenvalue of A.
7. A = [ 1  −9 ]    8. A = [ −1  0 ]
       [ 0   5 ]           [  1  6 ]
9. A = [ 6  0 ]    10. A = [  1  −2 ]
       [ 1  8 ]            [ −2   1 ]

Eigenvectors and Eigenvalues In Exercises 11–14, use the power method with scaling to approximate a dominant eigenvector of the matrix A. Start with x0 = [1 1 1]^T and complete four iterations. Then use x4 to approximate the dominant eigenvalue of A.
11. A = [ 3   0  0 ]    12. A = [ 1   2  0 ]
        [ 1  −1  0 ]            [ 0  −7  1 ]
        [ 0   2  8 ]            [ 0   0  0 ]
13. A = [ −1  −6   0 ]    14. A = [ 0   6  0 ]
        [  2   7   0 ]            [ 0  −4  0 ]
        [  1   2  −1 ]            [ 2   1  1 ]

The Power Method with Scaling In Exercises 15 and 16, the matrix A does not have a dominant eigenvalue. Apply the power method with scaling, starting with x0 = [1 1 1]^T, and observe the results of the first four iterations.
15. A = [ 1   1   0 ]    16. A = [  1  2  −2 ]
        [ 3  −1   0 ]            [ −2  5  −2 ]
        [ 0   0  −2 ]            [ −6  6  −3 ]

17. Rate of Convergence of the Power Method
(a) Find the eigenvalues of
A = [ 2  1 ]  and  B = [ 2  3 ].
    [ 1  2 ]           [ 1  4 ]
(b) Apply four iterations of the power method with scaling to each matrix in part (a), starting with x0 = [−1 2]^T.
(c) Compute the ratios λ2/λ1 for A and B. For which matrix do you expect faster convergence?

18. CAPSTONE Rework Example 2 using the power method with scaling. Compare your answer with the one found in Example 2.

Other Eigenvectors In Exercises 19 and 20, (a) find the eigenvalues and corresponding eigenvectors of A, (b) use the initial approximation x0 to complete two iterations of the power method with scaling, and (c) explain why the method does not seem to converge to a dominant eigenvector.
19. A = [  3  −1 ],  x0 = [ 1 ]
        [ −2   4 ]        [ 1 ]
20. A = [ −3   0   2 ]         [ 1 ]
        [  0  −1   0 ],  x0 =  [ 1 ]
        [  2   0  −3 ]         [ 1 ]

Other Eigenvalues In Exercises 21 and 22, observe that Ax = λx implies that A⁻¹x = (1/λ)x. Apply five iterations of the power method with scaling on A⁻¹ to approximate the eigenvalue of A with the smallest magnitude.
21. A = [ 2  −12 ]    22. A = [ 2   3  1 ]
        [ 1   −5 ]            [ 0  −1  2 ]
                              [ 0   0  3 ]

Another Scaling Technique In Exercises 23 and 24, apply four iterations of the power method with another scaling technique to approximate the dominant eigenvalue of the matrix: after each iteration, scale the approximation by dividing by its length so that the resulting approximation is a unit vector.
23. A = [ 5  6 ]    24. A = [  7  −4  2 ]
        [ 4  3 ]            [ 16  −9  6 ]
                            [  8  −4  5 ]

25. Use the proof of Theorem 10.3 to show that A(A^k x0) ≈ λ1(A^k x0) for large values of k. That is, show that the scale factors obtained by the power method approach the dominant eigenvalue.
The coefficients a0 and a1 of the least squares regression line y = a0 + a1x satisfy
the normal equations
na0 + (∑xi)a1 = ∑yi
(∑xi)a0 + (∑xi²)a1 = ∑xiyi.
The table shows the world populations (in billions) for selected years from 1985
through 2015. (Source: U.S. Census Bureau)
Find the second-degree least squares regression polynomial for the data and use the
resulting model to predict the world populations in 2020 and 2025.
solution
Begin by letting x = 0 represent 1985, letting x = 1 represent 1990, and so on. So,
the collection of points is {(0, 4.86), (1, 5.29), (2, 5.70), (3, 6.09), (4, 6.47), (5, 6.87),
(6, 7.26)}, which yields
n = 7,   ∑xi = 21,   ∑xi² = 91,   ∑xi³ = 441,   ∑xi⁴ = 2275,
∑yi = 42.54,   ∑xiyi = 138.75,   ∑xi²yi = 619.53
where each sum runs over i = 1 to 7.
The system of linear equations giving the coefficients of the quadratic model
y = a0 + a1x + a2x2
is
7a0 + 21a1 + 91a2 = 42.54
21a0 + 91a1 + 441a2 = 138.75
91a0 + 441a1 + 2275a2 = 619.53.
Gaussian elimination with partial pivoting on the matrix
[  7    21    91    42.54 ]
[ 21    91   441   138.75 ]
[ 91   441  2275   619.53 ]
and rounding to four decimal places after each intermediate calculation produces
[ 1.0000  4.8462  25.0000   6.8080 ]
[ 0.0000  1.0000   6.4998   0.3959 ].
[ 0.0000  0.0000   1.0000  −0.0033 ]
So, back-substitution produces the solution
a2 ≈ −0.0033,   a1 ≈ 0.4173,   a0 ≈ 4.8682
and the regression quadratic model is
y = 4.8682 + 0.4173x − 0.0033x².
Figure 10.1 compares this model with the collection of points. To predict the world
population in 2020, let x = 7, and obtain
y = 4.8682 + 0.4173(7) − 0.0033(72) ≈ 7.63 billion.
Similarly, the prediction for 2025 (x = 8) is
y = 4.8682 + 0.4173(8) − 0.0033(82) ≈ 8.00 billion.
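The normal-equation computation in this example is easy to mirror in Python. The sketch below builds (X^T X)a = X^T y directly (the names x, y, X, a are ours) and solves it exactly rather than by hand-rounded elimination.

```python
import numpy as np

x = np.arange(7)                        # 0 = 1985, 1 = 1990, ..., 6 = 2015
y = np.array([4.86, 5.29, 5.70, 6.09, 6.47, 6.87, 7.26])
X = np.vander(x, 3, increasing=True)    # columns 1, x, x^2
a = np.linalg.solve(X.T @ X, X.T @ y)   # normal equations (X^T X)a = X^T y
print(a.round(4))                       # approximately (4.8682, 0.4173, -0.0033)
print((a @ [1, 7, 7**2]).round(2))      # 2020 prediction, about 7.63
```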
The next example finds the third-degree least squares regression polynomial
y = a0 + a1x + a2x² + a3x³ for the points {(0, 0), (1, 2), (2, 3), (3, 2), (4, 1),
(5, 2), (6, 4)}. Applying Gaussian elimination with partial pivoting to the matrix
[   7     21     91     441    14  ]
[  21     91    441    2275    52  ]
[  91    441   2275  12,201   242  ]
[ 441   2275  12,201 67,171  1258  ]
and rounding to four decimal places after each intermediate calculation produces
[ 1.0000  5.1587  27.6667  152.3152  2.8526 ]
[ 0.0000  1.0000   8.5322   58.3539  0.6183 ]
[ 0.0000  0.0000   1.0000    9.7697  0.1285 ]
[ 0.0000  0.0000   0.0000    1.0000  0.1670 ]
which implies
a3 ≈ 0.1670, a2 ≈ −1.5030, a1 ≈ 3.6971, a0 ≈ −0.0731.
So, the cubic model is
y = −0.0731 + 3.6971x − 1.5030x2 + 0.1670x3.
The figure below compares this model with the points.
(The figure plots the points (0, 0), (1, 2), (2, 3), (3, 2), (4, 1), (5, 2), and (6, 4) together with the graph of the cubic model.)
An Application to Probability

Consider a maze with ten interior intersections and several outside openings, one of
which leads into a food corridor. A mouse is placed at the ith intersection, and pi
denotes the probability that it eventually emerges in the food corridor. Then each pi
is the average of the values at the openings and intersections adjacent to intersection
i, where an opening into the food corridor contributes 1 and any other outside opening
contributes 0. For instance, intersection 1 adjoins two outside openings and
intersections 2 and 3, which gives the first equation below. The other nine
probabilities can be represented using similar reasoning. So, you have the ten
equations below.
p1 = (1/4)(0) + (1/4)(0) + (1/4)p3 + (1/4)p2
p2 = (1/5)(0) + (1/5)p1 + (1/5)p3 + (1/5)p4 + (1/5)p5
p3 = (1/5)(0) + (1/5)p1 + (1/5)p2 + (1/5)p5 + (1/5)p6
p4 = (1/5)(0) + (1/5)p2 + (1/5)p5 + (1/5)p7 + (1/5)p8
p5 = (1/6)p2 + (1/6)p3 + (1/6)p4 + (1/6)p6 + (1/6)p8 + (1/6)p9
p6 = (1/5)(0) + (1/5)p3 + (1/5)p5 + (1/5)p9 + (1/5)p10
p7 = (1/4)(0) + (1/4)(1) + (1/4)p4 + (1/4)p8
p8 = (1/5)(1) + (1/5)p4 + (1/5)p5 + (1/5)p7 + (1/5)p9
p9 = (1/5)(1) + (1/5)p5 + (1/5)p6 + (1/5)p8 + (1/5)p10
p10 = (1/4)(0) + (1/4)(1) + (1/4)p6 + (1/4)p9
Rewriting these equations produces the system of ten linear equations in ten variables
below.
4p1 − p2 − p3 =0
−p1 + 5p2 − p3 − p4 − p5 =0
−p1 − p2 + 5p3 − p5 − p6 =0
− p2 + 5p4 − p5 − p7 − p8 =0
− p2 − p3 − p4 + 6p5 − p6 − p8 − p9 =0
− p3 − p5 + 5p6 − p9 − p10 =0
− p4 + 4p7 − p8 =1
− p4 − p5 − p7 + 5p8 − p9 =1
− p5 − p6 − p8 + 5p9 − p10 =1
− p6 − p9 + 4p10 =1
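The coefficient matrix of this ten-equation system is strictly diagonally dominant, so by Theorem 10.1 the Gauss-Seidel method converges. The sketch below simply assembles the system and solves it with NumPy; a direct solve stands in for the iteration.

```python
import numpy as np

A = np.array([
    [ 4, -1, -1,  0,  0,  0,  0,  0,  0,  0],
    [-1,  5, -1, -1, -1,  0,  0,  0,  0,  0],
    [-1, -1,  5,  0, -1, -1,  0,  0,  0,  0],
    [ 0, -1,  0,  5, -1,  0, -1, -1,  0,  0],
    [ 0, -1, -1, -1,  6, -1,  0, -1, -1,  0],
    [ 0,  0, -1,  0, -1,  5,  0,  0, -1, -1],
    [ 0,  0,  0, -1,  0,  0,  4, -1,  0,  0],
    [ 0,  0,  0, -1, -1,  0, -1,  5, -1,  0],
    [ 0,  0,  0,  0, -1, -1,  0, -1,  5, -1],
    [ 0,  0,  0,  0,  0, -1,  0,  0, -1,  4],
], dtype=float)
b = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

p = np.linalg.solve(A, b)   # p[i-1] = probability from intersection i
print(p.round(3))
```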
If the maximum life span of a member of a population is M years, then partition the
life span into the n age classes shown below.
[0, M/n)            first age class
[M/n, 2M/n)         second age class
⋮                   ⋮
[(n − 1)M/n, M]     nth age class
The age distribution vector x represents the number of population members in each age
class, where
    [ x1 ]   Number in first age class
x = [ x2 ]   Number in second age class
    [ ⋮  ]   ⋮
    [ xn ]   Number in nth age class
Over a period of M/n years, the probability that a member of the ith age class will
survive to become a member of the (i + 1)th age class is pi, where
0 ≤ pi ≤ 1, i = 1, 2, . . . , n − 1.
The average number of offspring produced by a member of the ith age class is bi, where
0 ≤ bi, i = 1, 2, . . . , n.
These numbers can be written in matrix form as shown below.
    [ b1   b2   b3   . . .  bn−1  bn ]
    [ p1   0    0    . . .  0     0  ]
L = [ 0    p2   0    . . .  0     0  ]
    [ ⋮    ⋮    ⋮           ⋮     ⋮  ]
    [ 0    0    0    . . .  pn−1  0  ]
Multiplying this age transition matrix L (also known as a Leslie matrix) by the age
distribution vector for a specific time period produces the age distribution vector for
the next time period. That is,
Lxj = xj+1.
In Section 7.4 you saw that the growth pattern for a population is stable when the same
percentage of the total population is in each age class each year. That is,
Lxj = xj+1 = λxj.
For populations with many age classes, the solution of this eigenvalue problem can be
found using the power method with scaling, as illustrated in Example 4.
Assume that a population of human females has the characteristics listed in the table
below. The table shows the age class (in years), the average number of female children
born to the members of each age class, and the probability of surviving to the next
age class. Find a stable age distribution vector for this population.

Age Class (in years)   Female Children   Probability
[0, 10)                0.000             0.985
[10, 20)               0.174             0.996
[20, 30)               0.782             0.994
[30, 40)               0.263             0.990
[40, 50)               0.022             0.975
[50, 60)               0.000             0.940
[60, 70)               0.000             0.866
[70, 80)               0.000             0.680
[80, 90)               0.000             0.361
[90, 100]              0.000             0.000

solution
The age transition matrix for this population is

    [ 0.000  0.174  0.782  0.263  0.022  0.000  0.000  0.000  0.000  0.000 ]
    [ 0.985  0      0      0      0      0      0      0      0      0     ]
    [ 0      0.996  0      0      0      0      0      0      0      0     ]
    [ 0      0      0.994  0      0      0      0      0      0      0     ]
A = [ 0      0      0      0.990  0      0      0      0      0      0     ].
    [ 0      0      0      0      0.975  0      0      0      0      0     ]
    [ 0      0      0      0      0      0.940  0      0      0      0     ]
    [ 0      0      0      0      0      0      0.866  0      0      0     ]
    [ 0      0      0      0      0      0      0      0.680  0      0     ]
    [ 0      0      0      0      0      0      0      0      0.361  0     ]
To apply the power method with scaling to find an eigenvector for this matrix, use
an initial approximation of x0 = [1 1 1 1 1 1 1 1 1 1]T. An approximation
for an eigenvector of A, with the percentage of each age in the total population, is
shown below.
    Eigenvector   Age Class    Percentage in Age Class
    [ 1.000 ]     [0, 10)      15.27
    [ 0.925 ]     [10, 20)     14.13
    [ 0.864 ]     [20, 30)     13.20
    [ 0.806 ]     [30, 40)     12.31
x = [ 0.749 ]     [40, 50)     11.44
    [ 0.686 ]     [50, 60)     10.48
    [ 0.605 ]     [60, 70)      9.24
    [ 0.492 ]     [70, 80)      7.51
    [ 0.314 ]     [80, 90)      4.80
    [ 0.106 ]     [90, 100]     1.62
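A Python sketch of this computation: build the Leslie matrix from the two table columns and iterate with scaling. The iteration count is our choice; more iterations give more digits.

```python
import numpy as np

births   = [0.000, 0.174, 0.782, 0.263, 0.022, 0, 0, 0, 0, 0]
survival = [0.985, 0.996, 0.994, 0.990, 0.975, 0.940, 0.866, 0.680, 0.361]

L = np.zeros((10, 10))
L[0] = births
L[np.arange(1, 10), np.arange(9)] = survival   # survival probabilities p_i

x = np.ones(10)                   # initial approximation [1 1 ... 1]^T
for _ in range(100):
    x = L @ x
    x /= np.abs(x).max()          # scaling step

print(x.round(3))                       # approximately the eigenvector above
print((100 * x / x.sum()).round(2))     # percentage in each age class
```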
10.4 Exercises

Least Squares Regression Analysis In Exercises 1–6, find the second-degree least squares regression polynomial for the data. Then graphically compare the model with the data.
1. (−2, 1), (−1, 0), (0, 0), (1, 1), (3, 2)
2. (0, 4), (1, 2), (2, −1), (3, 0), (4, 1), (5, 4)
3. (−2, 1), (−1, 2), (0, 6), (1, 3), (2, 0), (3, −1)
4. (1, 1), (2, 1), (3, 0), (4, −1), (5, −4)
5. (0, 0.44), (1, 0.78), (2, 1.43), (3, 2.92), (4, 5.08)
6. (−2, −3.52), (−1, 0.09), (0, 3.84), (1, 6.53), (2, 9.06)

Least Squares Regression Analysis In Exercises 7–12, find the third-degree least squares regression polynomial for the data. Then graphically compare the model with the data.
7. (0, 0), (1, 2), (2, 4), (3, 1), (4, 0), (5, 1)
8. (1, 1), (2, 4), (3, 4), (5, 1), (6, 2)
9. (−3, 4), (−1, 1), (0, 0), (1, 2), (2, 5)
10. (−7, 2), (−3, 0), (1, −1), (2, 3), (4, 6)
11. (−1, 0), (0, 3), (1, 2), (2, 0), (3, −2), (4, −3)
12. (0, 0), (2, 10), (4, 12), (6, 0), (8, −8)

13. Decompression Sickness The table shows recommended diving depth and time limits for recreational divers to reduce the probability of acquiring decompression sickness. (Source: U.S. Navy)

Depth (in feet)                         35   40   50   60  70  80  90  100  110
Recommended Maximum Time (in minutes)  310  200  100   60  50  40  30   25   20

(a) Find the second-degree least squares regression polynomial for the data.
(b) Sketch the graph of the model found in part (a).
(c) Use the model found in part (a) to approximate the maximum number of minutes a diver should stay at a depth of 120 feet.
(d) Compare your answer to part (c) with the recommended time of 15 minutes. Is the model found in part (a) accurate? Explain.

14. Health Expenditures The table shows the total national health expenditures (in trillions of dollars) in the United States from 2006 through 2013. (Source: U.S. Census Bureau)

Year                 2006   2007   2008   2009   2010   2011   2012   2013
Health Expenditures  2.167  2.304  2.414  2.506  2.604  2.705  2.817  2.919

(a) Find the second-degree least squares regression polynomial for the data. Let x = 6 correspond to 2006.
(b) Use the model found in part (a) to predict the expenditures for the years 2018 through 2020.

15. Population The table shows the total numbers of people (in millions) in the United States 65 years of age or older from 2009 through 2013. (Source: U.S. Census Bureau)

Year              2009  2010  2011  2012  2013
Number of People  39.5  40.4  41.4  43.1  44.7

(a) Find the third-degree least squares regression polynomial for the data. Let x = 9 correspond to 2009.
(b) Use the model found in part (a) to predict the total numbers of people in the United States 65 years of age or older in the years 2018 through 2020.

16. Food Stamp Benefits The table shows the numbers of households (in millions) receiving food stamp and supplemental nutrition assistance in the United States from 2009 through 2013. (Source: U.S. Census Bureau)

Year        2009  2010  2011  2012  2013
Households  11.7  13.6  14.9  15.8  15.7

(a) Find the third-degree least squares regression polynomial for the data. Let x = 9 correspond to 2009.
(b) Use the model found in part (a) to predict the numbers of households receiving such assistance in 2018 and 2019. Do your answers seem reasonable? Explain.
With
    [ y1 ]        [ 1  x1 ]
Y = [ y2 ],   X = [ 1  x2 ],   A = [ a0 ]
    [ ⋮  ]        [ ⋮   ⋮ ]        [ a1 ]
    [ yn ]        [ 1  xn ]
the matrix equation A = (X^T X)⁻¹X^T Y is equivalent to
a1 = (n∑xiyi − (∑xi)(∑yi)) / (n∑xi² − (∑xi)²)   and   a0 = ∑yi/n − a1(∑xi/n).

Temperature In Exercises 25 and 26, the plate in the figure has constant boundary temperatures of w°, x°, y°, and z°, and each interior temperature is the mean of the temperatures at its four neighboring positions. Approximate the interior temperatures for the given boundary values. (The figure for Exercise 26 shows nine interior nodes, numbered 1 through 9.)
25. (a) w = x = 100, y = z = 0
    (b) w = x = 110, y = z = 10
26. (a) w = 80, x = 120, y = 40, z = 0
    (b) w = 70, x = 110, y = 30, z = −10
Stable Age Distribution In Exercises 27–36, the matrix represents the age transition matrix for a population. Use the power method with scaling to find a stable age distribution vector.
27. A = [ 1    4 ]    28. A = [ 1    2 ]
        [ 1/2  0 ]            [ 1/4  0 ]
29. A = [ 1    3 ]    30. A = [ 1    5 ]
        [ 1/5  0 ]            [ 1/3  0 ]
31. A = [ 1    2    2 ]    32. A = [ 0    1    2 ]
        [ 1/3  0    0 ]            [ 1/2  0    0 ]
        [ 0    1/3  0 ]            [ 0    1/4  0 ]
33. A = [ 0    2    2 ]    34. A = [ 1    4    2 ]
        [ 1/2  0    0 ]            [ 3/4  0    0 ]
        [ 0    1    0 ]            [ 0    1/4  0 ]
35. A = [ 1    7    20 ]    36. A = [ 0    7    10 ]
        [ 0.2  0    0  ]            [ 0.5  0    0  ]
        [ 0    0.3  0  ]            [ 0    0.5  0  ]

39. Television Watching A college dormitory houses 200 students. Those who watch an hour or more of television on any day always watch for less than an hour the next day. One-fourth of those who watch television for less than an hour one day will watch an hour or more the next day. Half of the students watched television for an hour or more today. Use the power method to approximate a dominant eigenvector for the corresponding stochastic matrix.

40. Smokers and Nonsmokers In a population of 10,000, there are 5000 nonsmokers, 2500 smokers of one pack or less per day, and 2500 smokers of more than one pack per day. During any month, there is a 5% probability that a nonsmoker will begin smoking a pack or less per day, and a 2% probability that a nonsmoker will begin smoking more than a pack per day. For smokers who smoke a pack or less per day, there is a 10% probability of quitting and a 10% probability of increasing to more than a pack per day. For smokers who smoke more than a pack per day, there is a 5% probability of quitting and a 10% probability of dropping to a pack or less per day. Use the power method to approximate a dominant eigenvector for the corresponding stochastic matrix.

41. Writing In Example 2 in Section 2.5, the stochastic matrix
P = [ 0.70  0.15  0.15 ]
    [ 0.20  0.80  0.15 ]
    [ 0.10  0.05  0.70 ]
represents the transition probabilities for a consumer preference model. Use the power method to approximate a dominant eigenvector for this matrix. How does the approximation relate to the steady state matrix described in the discussion following Example 3 in Section 2.5?
10 Review

Floating Point Form In Exercises 1–6, express the real number in floating point form.
1. 528.6   2. 475.2   3. −4.85   4. −22.5   5. 3 1/2   6. 4 7/8

Finding Stored Values In Exercises 7–12, determine the stored value of the real number in a computer that (a) rounds to three significant digits and (b) rounds to four significant digits.
7. 25.2   8. −41.2   9. −250.231   10. 628.742   11. 5/12   12. 3/16

Propagation of Rounding Error In Exercises 13 and 14, evaluate the determinant of the matrix, rounding each intermediate calculation to three significant digits. Then find the exact value and compare the two results.
13. [ 20.24   2.5  ]    14. [ 10.25  3.2 ]
    [ 12.5    6.25 ]        [  8.5   8.1 ]

Gaussian Elimination and Rounding Error In Exercises 15 and 16, use Gaussian elimination to solve the system. After each intermediate calculation, round the result to three significant digits. Then find the exact solution and compare the two results.
15. 2.53x + 8.5y = 29.65      16. 12.5x − 18.2y = 56.8
    2.33x + 16.1y = 43.85         3.2x − 15.1y = 4.1

Gaussian Elimination with Partial Pivoting In Exercises 17–20, (a) use Gaussian elimination without partial pivoting to solve the system, rounding to three significant digits after each intermediate calculation, (b) use Gaussian elimination with partial pivoting to solve the same system, again rounding to three significant digits after each intermediate calculation, and (c) compare both solutions with the exact solution provided.
17. 2.15x + 7.25y = 13.7      18. 4.25x + 6.3y = 16.85
    3.12x + 6.3y = 15.66          6.32x + 2.14y = 10.6
    (Exact: x = 3, y = 1)         (Exact: x = 1, y = 2)
19. 2.54x + 4.98y + 5.77z = 24.73
    1.67x − 6.03y − 12.15z = −50.18
    −8.61x − 3.86y + 15.38z = 47.03
    (Exact: x = −1, y = 2, z = 3)
20. −4.11x + 2.35y − 7.80z = −30.12
    5.44x − 4.59y + 6.01z = 44.43
    19.28x + 8.56y − 13.47z = −26.27
    (Exact: x = 2, y = −6, z = 1)

Solving an Ill-Conditioned System In Exercises 21 and 22, use Gaussian elimination to solve the ill-conditioned system of linear equations, rounding each intermediate calculation to three significant digits. Then compare this solution with the exact solution provided.
21. x + y = −1
    x + (999/1000)y = 4001/1000
    (Exact: x = 5000, y = −5001)
22. x − y = −1
    −(99/100)x + y = 20,101/100
    (Exact: x = 20,001, y = 20,002)

The Jacobi Method In Exercises 23 and 24, apply the Jacobi method to the system of linear equations, using the initial approximation (x1, x2, x3, . . . , xn) = (0, 0, 0, . . . , 0). Continue performing iterations until two successive approximations are identical when rounded to three significant digits.
23. 2x1 − x2 = −1      24. x1 + 4x2 = −3
    x1 + 2x2 = 7           2x1 + x2 = 1

The Gauss-Seidel Method In Exercises 25 and 26, apply the Gauss-Seidel method to the system of linear equations in the stated exercise.
25. Exercise 23   26. Exercise 24

Strictly Diagonally Dominant Matrices In Exercises 27–30, determine whether the matrix is strictly diagonally dominant.
27. [ 4   2 ]    28. [ −1  −2 ]
    [ 0  −3 ]        [  1  −3 ]
29. [  4   0   2 ]    30. [ 4   2  −1 ]
    [ 10  12  −2 ]        [ 0  −2  −1 ]
    [  1  −2   0 ]        [ 1   1  −1 ]

Interchanging Rows to Obtain Convergence In Exercises 31–34, interchange the rows of the system of linear equations to obtain a system with a strictly diagonally dominant coefficient matrix. Then apply the Gauss-Seidel method to approximate the solution to four significant digits.
31. x1 + 2x2 = −5      32. x1 + 4x2 = −4
    5x1 − x2 = 8           2x1 + x2 = 6
33. 2x1 + 4x2 + x3 = −2    34. x1 + 3x2 − x3 = 2
    4x1 + x2 + x3 = 1          x1 + x2 + 3x3 = −1
    x1 − x2 − 4x3 = 2          3x1 + x2 + x3 = 1
Finding a Dominant Eigenvector In Exercises 35–38, use the techniques presented in Chapter 7 to find the eigenvalues of the matrix A. When A has a dominant eigenvalue, find a corresponding dominant eigenvector.
35. A = [ 1  1 ]    36. A = [ 2  1 ]
        [ 1  1 ]            [ 0  4 ]
37. A = [ −2   2  −3 ]    38. A = [ 1  2  −3 ]
        [  2   1  −6 ]            [ 0  5   1 ]
        [ −1  −2   0 ]            [ 0  0   4 ]

Using the Rayleigh Quotient In Exercises 39–44, use the Rayleigh quotient to find the eigenvalue λ of the matrix A corresponding to the eigenvector x.
39. A = [ 2  −12 ],  x = [ 3 ]
        [ 1   −5 ]       [ 1 ]
40. A = [  6  −3 ],  x = [  3 ]
        [ −2   1 ]       [ −1 ]
41. A = [ 2  0  1 ]        [ 0 ]
        [ 0  3  4 ],  x =  [ 1 ]
        [ 0  0  1 ]        [ 0 ]
42. A = [  1  2  −2 ]      [ 1 ]
        [ −2  5  −2 ], x = [ 1 ]
        [ −6  6  −3 ]      [ 3 ]
43. A = [ 0  −1  1 ]       [ −1 ]
        [ 2   4  2 ],  x = [  5 ]
        [ 1   1  0 ]       [  1 ]
44. A = [  3   2  −3 ]      [ 3 ]
        [ −3  −4   9 ], x = [ 0 ]
        [ −1  −2   5 ]      [ 1 ]

Eigenvectors and Eigenvalues In Exercises 45–48, use the power method with scaling to approximate a dominant eigenvector of the matrix A. Start with x0 = [1 1]^T and complete four iterations. Then use x4 to approximate the dominant eigenvalue of A.
45. A = [ 7  2 ]    46. A = [ −3  10 ]
        [ 2  4 ]            [  5   2 ]
47. A = [ 2   1 ]    48. A = [ 6  −3 ]
        [ 0  −4 ]            [ 3   0 ]

Least Squares Regression Analysis In Exercises 49–52, find the second-degree least squares regression polynomial for the data. Then use a software program or a graphing utility to find a second-degree least squares regression polynomial. Compare the results.
49. (−2, 0), (−1, 2), (0, 3), (1, 2), (3, 0)
50. (−2, 2), (−1, 1), (0, −1), (1, −1), (3, 0)
51. (0, −5), (3, −4), (5, −1), (7, 3), (11, 9)
52. (0, 12), (2, 11), (4, 7), (6, 4), (8, −3)

53. Hospital Care The table shows the amounts spent for hospital care (in billions of dollars) in the United States from 2009 through 2013. (Source: Centers for Medicare and Medicaid Services)

Year          2009   2010   2011   2012   2013
Amount Spent  776.8  814.9  849.9  898.5  936.9

Find the second-degree least squares regression polynomial for the data. Let x = 9 correspond to 2009. Then use a software program or a graphing utility to find a second-degree least squares regression polynomial. Compare the results.

54. Dormitory Costs The table shows the average costs (in dollars) of a college dormitory room from 2009 through 2013. (Source: Digest of Education Statistics)

Year  2009  2010  2011  2012  2013
Cost  4446  4657  4874  5095  5296

Find the third-degree least squares regression polynomial for the data. Let x = 9 correspond to 2009. Then use a software program or a graphing utility to find a third-degree regression polynomial. Compare the results.

55. Probability A researcher performs the experiment in Example 3 of Section 10.4 with the maze shown in the figure. Find the probability that the mouse emerges in the food corridor when it begins at the ith intersection. (The figure shows eight numbered intersections, 1 through 4 in the upper row and 5 through 8 in the lower row, with the food corridor at the bottom.)

56. Probability Rework Exercise 55 assuming that the upper corridor is also a food corridor.

Stable Age Distribution In Exercises 57–60, the matrix represents the age transition matrix for a population. Use the power method with scaling to find a stable age distribution vector.
57. A = [ 1    2 ]    58. A = [ 1    3 ]
        [ 1/2  0 ]            [ 1/3  0 ]
59. A = [ 1    5 ]    60. A = [ 0    2    4 ]
        [ 1/4  0 ]            [ 1/4  0    0 ]
                              [ 0    3/4  0 ]
10 Projects

1 The Successive Over-Relaxation (SOR) Method

In Section 10.2, you studied two iterative methods for solving linear systems, the
Jacobi method and the Gauss-Seidel method. A third method, known as the successive
over-relaxation (SOR) method, uses extrapolations of the Gauss-Seidel method. These
extrapolations take the form of a weighted average of the preceding iterate xi^(k−1)
and the corresponding Gauss-Seidel update x̃i^(k),
xi^(k) = (1 − ω)xi^(k−1) + ω x̃i^(k)
where the weight ω is called the relaxation factor; choosing ω = 1 recovers the
Gauss-Seidel method.
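As a concrete starting point for the project, here is a minimal Python sketch of SOR; the function name, default ω, and stopping rule are our choices.

```python
import numpy as np

def sor(A, b, omega=1.25, tol=1e-6, max_iter=1000):
    """Successive over-relaxation: each component is a weighted average
    of its previous value and the Gauss-Seidel update (omega = 1
    reduces to the Gauss-Seidel method)."""
    A, b = np.array(A, float), np.array(b, float)
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] = (1 - omega) * x_old[i] + omega * gs
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x
```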
1. Use a software program or a graphing utility to create a scatter plot of the data. Let
x = 0 correspond to 1900, x = 1 correspond to 1910, and so on.
2. Using the scatter plot, describe any patterns in the data. Do the data appear to be
linear, quadratic, or cubic in nature? Explain.
3. Use the techniques presented in this chapter to find (a) a linear least squares
regression equation, (b) a second-degree least squares regression equation, and
(c) a third-degree least squares regression equation to fit the data.
4. Graph each equation with the data. Briefly describe which of the regression
equations best fits the data.
5. Use each model to predict the populations of the United States for the years 2020,
2030, 2040, and 2050. Which of the regression equations appears to be the best
model for predicting future populations? Explain.
6. The U.S. Census Bureau projects the populations of the United States for the years
2020, 2030, 2040, and 2050 as shown in the table below. Do any of your models
produce the same projections? Explain any possible differences between your
projections and the U.S. Census Bureau projections.

Year   Population (in millions)
2020   334.5
2030   359.4
2040   380.2
2050   398.3