
Numerical Calculus I

Systems of Linear Equations

Dr. Luis Sanchez


Objectives:

• Knowing how to solve small sets of linear equations with
  the graphical method and Cramer's rule.
• Understanding how to implement forward elimination
  and back substitution as in Gauss elimination.
• Understanding the concepts of singularity and
  ill-conditioning.
• Understanding how pivoting is implemented.
Systems of Linear Equations
A system of linear equations can be presented in different forms:

Standard form:
    2x1 + 4x2 - 3x3 = 3
  2.5x1 -  x2 + 3x3 = 5
     x1       - 6x3 = 7

Matrix form:
  | 2    4  -3 | |x1|   |3|
  | 2.5 -1   3 | |x2| = |5|
  | 1    0  -6 | |x3|   |7|

  [A]{x} = {b}
Review of Matrices
A general n x m matrix (n rows, m columns), with elements aij
(i = row index, j = column index):

  [A] = | a11 a12 ... a1m |
        | a21 a22 ... a2m |
        |  :   :       :  |
        | an1 an2 ... anm |

Row vector (1 x n):      [R] = [ r1 r2 ... rn ]

Column vector (m x 1):   [C] = | c1 |
                               | c2 |
                               | :  |
                               | cm |

Square matrix:
- [A]nxm is a square matrix if n = m.
- A system of n equations with n unknowns has a square coefficient
  matrix.
Special Types of Square Matrices
5 1 2 16  a11  1 
1 3 7 39     1 
[ A]   [ D]  
a22  [I ]   
2 7 9 6     1 
     
16 39 6 88  ann   1

Symmetric Diagonal Identity

𝑎11 𝑎12 ⋯ 𝑎1𝑛 a11 


𝑎22 ⋯ 𝑎2𝑛  
[𝐴] = a a
⋱ ⋮ [ A]   21 22 
𝑎𝑛𝑛   
 
a n1  ann 

Upper Triangular Lower Triangular


Review of Matrices

• Augmented matrix: a special way of showing two matrices together.

  For example, [A] = | a11 a12 |  augmented with the
                     | a21 a22 |

  column vector {B} = | b1 |   is   | a11 a12 b1 |
                      | b2 |        | a21 a22 b2 |
• Determinant of a matrix:
A single number. Determinant of [A] is shown as |A|.
Methods to solve Systems of Linear
Equations

1. Graphical Method.
2. Cramer’s Rule. For n ≤ 3
3. Method of Elimination.
4. Gauss Elimination.
5. Gauss-Jordan Elimination.
1. Graphical Method
• For small sets of simultaneous equations, graphing
them and determining the location of the intercept
provides a solution.
• Solve:

  3x1 + 2x2 = 18
  -x1 + 2x2 = 2

• Plot x2 vs. x1; the intersection of the lines
  gives the solution (x1 = 4, x2 = 3).

For n = 3, each equation will be a plane in a 3D coordinate system. The
solution is the point where these planes intersect.
For n > 3, graphical solution is not practical.
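The same intersection can be checked algebraically. A minimal Python sketch (Python is not used in these notes; this is only illustrative):

```python
# Sketch: confirm the intersection of the two lines above,
#   3*x1 + 2*x2 = 18  and  -x1 + 2*x2 = 2,
# by eliminating x2 (the x2 coefficients are equal, so subtract).
a1, b1, c1 = 3.0, 2.0, 18.0   # first line:  a1*x1 + b1*x2 = c1
a2, b2, c2 = -1.0, 2.0, 2.0   # second line: a2*x1 + b2*x2 = c2

x1 = (c1 - c2) / (a1 - a2)    # (18 - 2) / (3 - (-1)) = 4.0
x2 = (c1 - a1 * x1) / b1      # (18 - 3*4) / 2 = 3.0
```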
1. Graphical Method (cont)
• Graphing the equations can also show systems where:
a) No solution exists
b) Infinite solutions exist
c) System is ill-conditioned (Mal Condicionado)

(sensitive to round-off errors)


1. Graphical Method
Mathematically
• Coefficient matrices of (a) & (b) are singular. There
  is no unique solution for these systems.
  Determinants of the coefficient matrices are zero,
  and these matrices cannot be inverted.

• Coefficient matrix of (c) is almost singular. Its
  inverse is difficult to compute. This system has a unique
  solution, which is not easy to determine numerically
  because of its extreme sensitivity to round-off errors.
2. Cramer’s Rule
• For a set of three equations:  [A]{x} = {B}
  where [A] is the coefficient matrix:

  [A] = | a11 a12 a13 |
        | a21 a22 a23 |
        | a31 a32 a33 |

• Cramer’s Rule states that each unknown in a
  system of linear algebraic equations may be
  expressed as a fraction of two determinants with
  denominator D and with the numerator obtained
  from D by replacing the column of coefficients
  of the unknown in question by the constants b1,
  b2, ..., bn.
2. Cramer’s Rule
The determinant D, expanded by minors along the first row:

  D = | a11 a12 a13 |
      | a21 a22 a23 |
      | a31 a32 a33 |

  D11 = | a22 a23 | = a22*a33 - a32*a23
        | a32 a33 |

  D12 = | a21 a23 | = a21*a33 - a31*a23
        | a31 a33 |

  D13 = | a21 a22 | = a21*a32 - a31*a22
        | a31 a32 |

  D = a11*D11 - a12*D12 + a13*D13

Each unknown is a ratio of determinants, with the b column replacing
the corresponding coefficient column:

       | b1 a12 a13 |            | a11 b1 a13 |            | a11 a12 b1 |
  x1 = | b2 a22 a23 | / D   x2 = | a21 b2 a23 | / D   x3 = | a21 a22 b2 | / D
       | b3 a32 a33 |            | a31 b3 a33 |            | a31 a32 b3 |
Cramer’s Rule Example
• Find x2 in the following system of equations:

  0.3 x1 + 0.52 x2 +     x3 = -0.01
  0.5 x1 +      x2 + 1.9 x3 =  0.67
  0.1 x1 + 0.3  x2 + 0.5 x3 = -0.44

• Find the determinant D (expanding along the first row):

  D = 0.3 |1   1.9| - 0.52 |0.5 1.9| + 1 |0.5 1  |  = -0.0022
          |0.3 0.5|        |0.1 0.5|     |0.1 0.3|

• Find determinant D2 by replacing D's second column with b:

  D2 = 0.3 |0.67  1.9| + 0.01 |0.5 1.9| + 1 |0.5  0.67|  = 0.0649
           |-0.44 0.5|        |0.1 0.5|     |0.1 -0.44|

• Divide:

  x2 = D2 / D = 0.0649 / (-0.0022) = -29.5
2. Cramer’s Rule

• For a singular system, D = 0, and a solution cannot be
  obtained.

• For large systems Cramer's rule is not practical because
  calculating determinants is costly.
• Solving an N-by-N system requires (N + 1)(N - 1)N!
  multiplications.
• To solve a 30-by-30 system, 2.38*10^35 multiplications
  are needed.
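The quoted operation count can be evaluated directly; note that (N + 1)(N - 1)N! is the same as (N - 1)(N + 1)!. A quick Python check:

```python
import math

def cramer_mults(n):
    # multiplications for Cramer's rule with determinants evaluated
    # by cofactor expansion: (n - 1)(n + 1)!  [= (n + 1)(n - 1) n!]
    return (n - 1) * math.factorial(n + 1)

count = cramer_mults(30)   # about 2.38e35 for a 30-by-30 system
```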
3. Method of Elimination

• The basic strategy is to successively solve one of the
  equations of the set for one of the unknowns and to
  eliminate that variable from the remaining equations
  by substitution.

• The elimination of unknowns can be extended to
  systems with more than two or three equations.
  However, the method becomes extremely tedious to
  solve by hand.
3. Elimination of Unknowns Method
Given a 2x2 set of equations:    2.5x1 + 6.2x2 = 3.0
                                 4.8x1 - 8.6x2 = 5.5

• Multiply the 1st eqn by 8.6   ->  21.50x1 + 53.32x2 = 25.8
  and the 2nd eqn by 6.2        ->  29.76x1 - 53.32x2 = 34.1

• Add these equations           ->  51.26x1 + 0x2 = 59.9

• Solve for x1: x1 = 59.9/51.26 = 1.168552478

• Using the 1st eqn, solve for x2:
  x2 = (3.0 - 2.5*1.168552478)/6.2 = 0.01268045242

• Check if these satisfy the 2nd eqn:
  4.8*1.168552478 - 8.6*0.01268045242 = 5.500000004

  (The difference is due to round-off errors.)
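The same elimination can be carried out numerically; a sketch (variable names invented here):

```python
# Scale eq1 by 8.6 and eq2 by 6.2 so the x2 terms cancel on addition:
#   21.50*x1 + 53.32*x2 = 25.8
#   29.76*x1 - 53.32*x2 = 34.1
x1 = (3.0 * 8.6 + 5.5 * 6.2) / (2.5 * 8.6 + 4.8 * 6.2)   # 59.9 / 51.26
x2 = (3.0 - 2.5 * x1) / 6.2                              # back into eq1
residual = 4.8 * x1 - 8.6 * x2 - 5.5                     # eq2 check, ~0
```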


4. Naïve Gauss Elimination
• For larger systems, Cramer’s Rule or Elimination
method can become unwieldy.
• Instead, a sequential process of removing
unknowns from equations using forward
elimination followed by back substitution may be
used - this is Gauss elimination.
• “Naïve” Gauss elimination simply means the
process does not check for potential problems
resulting from division by zero.
4. Naive Gauss Elimination Method
• Consider the following system of n equations.

a11x1 + a12x2 + ... + a1nxn = b1 (1)


a21x1 + a22x2 + ... + a2nxn = b2 (2)
...
an1x1 + an2x2 + ... + annxn = bn (n)

Form the augmented matrix of [A|B].


Step 1 : Forward Elimination: Reduce the system to an upper triangular
system.

1.1- First eliminate x1 from 2nd to nth equations.


- Multiply the 1st eqn. by a21/a11 & subtract it from the 2nd equation.
This is the new 2nd eqn.
- Multiply the 1st eqn. by a31/a11 & subtract it from the 3rd equation.
This is the new 3rd eqn.
...
- Multiply the 1st eqn. by an1/a11 & subtract it from the nth equation.
This is the new nth eqn.
4. Naive Gauss Elimination Method
Note:
- In these steps the 1st eqn is the pivot equation and a11 is the
  pivot element.
- A division by zero may occur if the pivot element is
  zero. Naive Gauss elimination does not check for this.

The modified system is (the prime ' indicates that the
system has been modified once):

  | a11 a12  a13  ... a1n  | |x1|   |b1 |
  | 0   a22' a23' ... a2n' | |x2|   |b2'|
  | 0   a32' a33' ... a3n' | |x3| = |b3'|
  | :    :    :        :   | | :|   | : |
  | 0   an2' an3' ... ann' | |xn|   |bn'|
4. Naive Gauss Elimination Method (cont’d)

1.2- Now eliminate x2 from 3rd to nth equations.

a11 a12 a13  a1n   x 1  b1 


 0 a22 a23  a2n    b 
The modified system is    x 2  2
   
 0 0 a33
  a3n   x 3   b3 
    
       
   
 0 0 an3   
 ann xn   bn 
 

Repeat steps (1.1) and (1.2) upto (1.n-1).


a11 a12 a13 a1n   x 1   b1 
we will get this upper 0 
a22 
a23 a2 n   x   b 
triangular system   
 
2
 

2 

0 0 
a33 a3n   x 3    b3 
     
     
 0 0 0 0 ( n 1) 
ann   x n   ( n 1)
b n  
4. Naive Gauss Elimination Method (cont’d)

Step 2: Back substitution

Find the unknowns starting from the last equation.
1. The last equation involves only xn. Solve for it:

     xn = bn^(n-1) / ann^(n-1)

2. Use this xn in the (n-1)th equation and solve for xn-1.
...
3. Use all previously calculated x values in the 1st eqn
   and solve for x1.

In general:

        bi^(i-1) - sum(j = i+1 to n) aij^(i-1) xj
   xi = -----------------------------------------   for i = n-1, n-2, ..., 1
                      aii^(i-1)
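The back-substitution formula can be sketched in Python (the slides use MATLAB; indices here are 0-based). The test system is the upper triangular matrix reached in Example 1 later in these notes:

```python
# Sketch: back substitution on an upper triangular system U x = c,
# implementing the formula above (0-based indices).
def back_substitute(U, c):
    n = len(c)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

# Upper triangular system produced by forward elimination in Example 1:
U = [[6.0, -2.0, 2.0,  4.0],
     [0.0, -4.0, 2.0,  2.0],
     [0.0,  0.0, 2.0, -5.0],
     [0.0,  0.0, 0.0, -3.0]]
c = [16.0, -6.0, -9.0, -3.0]
x = back_substitute(U, c)   # close to [3.0, 1.0, -2.0, 1.0]
```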
Summary of Naive Gauss Elimination Method
Naive Gauss Elimination Method
Example 1

Solve the following system using Naive Gauss Elimination.

6x1 – 2x2 + 2x3 + 4x4 = 16


12x1 – 8x2 + 6x3 + 10x4 = 26
3x1 – 13x2 + 9x3 + 3x4 = -19
-6x1 + 4x2 + x3 - 18x4 = -34

Step 0: Form the augmented matrix

6 –2 2 4 | 16
12 –8 6 10 | 26 R2-2R1
3 –13 9 3 | -19 R3-0.5R1
-6 4 1 -18 | -34 R4-(-R1)
Naive Gauss Elimination Method
Example 1 (cont’d)
Step 1: Forward elimination

1. Eliminate x1 6 –2 2 4 | 16 (Does not change. Pivot is 6)


0 –4 2 2 | -6
0 –12 8 1 | -27 R3-3R2
0 2 3 -14 | -18 R4-(-0.5R2)

2. Eliminate x2 6 –2 2 4 | 16 (Does not change.)


0 –4 2 2 | -6 (Does not change. Pivot is -4)
0 0 2 -5 | -9
0 0 4 -13 | -21 R4-2R3

3. Eliminate x3 6 –2 2 4| 16 (Does not change.)


0 –4 2 2| -6 (Does not change.)
0 0 2 -5 | -9 (Does not change. Pivot is 2)
0 0 0 -3 | -3
Naive Gauss Elimination Method
Example 1 (cont’d)
Step 2: Back substitution

Find x4 x4 =(-3)/(-3) = 1

Find x3 x3 =(-9+5*1)/2 = -2

Find x2 x2 =(-6-2*(-2)-2*1)/(-4) = 1

Find x1 x1 =(16+2*1-2*(-2)-4*1)/6 = 3
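The whole procedure, forward elimination plus back substitution, can be sketched in Python and checked against this example (the slides' own listing, later on, is MATLAB):

```python
# Sketch: naive Gauss elimination (no pivoting), checked against Example 1.
def gauss_naive(A, b):
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]   # naive: no zero-pivot check
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[ 6.0,  -2.0, 2.0,   4.0],
     [12.0,  -8.0, 6.0,  10.0],
     [ 3.0, -13.0, 9.0,   3.0],
     [-6.0,   4.0, 1.0, -18.0]]
b = [16.0, 26.0, -19.0, -34.0]
x = gauss_naive(A, b)   # close to [3.0, 1.0, -2.0, 1.0]
```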
Naive Gauss Elimination Method Example 2
(Using 6 Significant Figures)
3.0 x1 - 0.1 x2 - 0.2 x3 = 7.85
0.1 x1 + 7.0 x2 - 0.3 x3 = -19.3 R2-(0.1/3)R1
0.3 x1 - 0.2 x2 + 10.0 x3 = 71.4 R3-(0.3/3)R1

Step 1: Forward elimination

3.00000 x1- 0.100000 x2 - 0.200000 x3 = 7.85000


7.00333 x2 - 0.293333 x3 = -19.5617
- 0.190000 x2 + 10.0200 x3 = 70.6150

3.00000 x1- 0.100000 x2 - 0.20000 x3 = 7.85000


7.00333 x2 - 0.293333 x3 = -19.5617
10.0120 x3 = 70.0843
Naive Gauss Elimination Method Example 2
(cont’d)
Step 2: Back substitution

x3 = 7.00003
x2 = -2.50000
x1 = 3.00000

Exact solution:
x3 = 7.0
x2 = -2.5
x1 = 3.0
Pseudo-code of Naive Gauss Elimination Method

(a) Forward Elimination (b) Back substitution

k,j
function x = GaussNaive(A,b)
% GaussNaive: naive Gauss elimination
% x = GaussNaive(A,b): Gauss elimination without pivoting.
% input:
% A = coefficient matrix
% b = right hand side vector
% output:
% x = solution vector
[m,n] = size(A);
if m~=n
error('Matrix A must be square');
end
nb = n+1;
Aug = [A b]; % back substitution
% forward elimination x = zeros(n,1);
for k = 1:n-1 x(n) = Aug(n,nb)/Aug(n,n);
for i = k+1:n for i = n-1:-1:1
factor = Aug(i,k)/Aug(k,k); x(i) = (Aug(i,nb)-
Aug(i,k:nb) = Aug(i,k:nb)-factor*Aug(k,k:nb); Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end end
end end
Pitfalls of Gauss Elimination Methods
1. Division by zero

         2 x2 + 3 x3 = 8       a11 = 0 (the pivot element)
  4 x1 + 6 x2 + 7 x3 = -3
  2 x1 +   x2 + 6 x3 = 5

It is possible that during both elimination and back-


substitution phases a division by zero can occur.

2. Round-off errors
In the previous example, even though 6 significant digits were kept
during the calculations, we still ended up with a result close to,
but not exactly, the true solution:
  x3 = 7.00003, instead of x3 = 7.0
Pitfalls of Gauss Elimination (cont’d)
3. Ill-conditioned systems

    x1 + 2x2 = 10
  1.1x1 + 2x2 = 10.4    ->  x1 = 4.0 & x2 = 3.0

     x1 + 2x2 = 10
  1.05x1 + 2x2 = 10.4   ->  x1 = 8.0 & x2 = 1.0

Ill-conditioned systems are those where small changes in
coefficients result in large changes in the solution. Alternatively, it
happens when two or more equations are nearly identical, resulting in a
wide range of answers that approximately satisfy the equations. Since
round-off errors can induce small changes in the coefficients, these
changes can lead to large solution errors.
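A short sketch makes the sensitivity concrete: the two systems above differ only in one coefficient (1.1 vs 1.05), yet their solutions are far apart (solve2 is a helper name invented here):

```python
# Sketch: sensitivity of the ill-conditioned pair above.
def solve2(a11, a12, a21, a22, b1, b2):
    # solve a 2x2 system by Cramer's rule
    d = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / d,
            (a11 * b2 - b1 * a21) / d)

x_a = solve2(1.0, 2.0, 1.10, 2.0, 10.0, 10.4)   # close to (4, 3)
x_b = solve2(1.0, 2.0, 1.05, 2.0, 10.0, 10.4)   # close to (8, 1)
```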
Pitfalls of Gauss Elimination (cont’d)
4. Singular systems.
• When two equations are identical, we lose one degree
  of freedom and are dealing with the impossible case of n-1
  equations for n unknowns.

To check for singularity:

• Run the forward elimination process to obtain the
  triangular system. The determinant of such a system is the
  product of all the diagonal elements. If a zero diagonal
  element is created, the determinant is zero and we have a
  singular system.
• The determinant of a singular system is zero.
Techniques for Improving Solutions
1. Use of more significant figures to reduce the round-off
   error.
2. Pivoting. If a pivot element is zero, the elimination step leads to
   division by zero. The same problem may arise when the
   pivot element is close to zero. This problem can be avoided
   by:
   - Partial pivoting. Switching the rows so that the largest
     element is the pivot element.
   - Complete pivoting. Searching for the largest element in all
     rows and columns, then switching.
3. Scaling
   - Helps solve the problem of ill-conditioned systems.
   - Minimizes round-off error.
Partial Pivoting

Before each row is normalized, find the largest
available coefficient in the column below the pivot
element. The rows can then be switched so that the
largest coefficient is used as the pivot element.
Example: Gauss Elimination with partial
pivot
2x 2 x 4 0
2x 1 2x 2 3x 3 2x 4  2
4x 1 3x 2 x 4  7
6x 1 x 2 6x 3 5x 4 6

a) Forward Elimination

0 2 0 1 0  6 1 6 5 6
   
2 2 3 2 2  2 2 3 2 2 

R 4
R 1 

 4 3 0 1 7   4 3 0 1 7 
   
 6 1 6 5 6   0 2 0 1 0 
Example: Gauss Elimination (cont’d)
 6 1 6 5 6
 
2 2 3 2 2  R 2  0.33333  R 1
 4 3 0 1 7  R 3  0.66667  R 1
 
 0 2 0 1 0 
6 1 6 5 6 
 
0 1.6667 5 3.6667 4 

R3
R 2 

0 3.6667 4 4.3333 11
 
0 2 0 1 0 
6 1 6 5 6 
 
0 3.6667 4 4.3333 11
0 1.6667 5 3.6667 4 
 
0 2 0 1 0 
Example: Gauss Elimination (cont’d)
6 1 6 5 6 
 
0 3.6667 4 4.3333 11
0 1.6667 5 3.6667 4  R 3  0.45455  R 2
 
0 2 0 1 0  R 4  0.54545  R 2

6 1 6 5 6 
 
0 3.6667 4 4.3333 11 
0 0 6.8182 5.6364 9.0001
 
0 0 2.1818 3.3636 5.9999  R 4  0.32000  R 3

6 1 6 5 6 
 
0 3.6667 4 4.3333 11 
0 0 6.8182 5.6364 9.0001
 
0 0 0 1.5600 3.1199 
Example: Gauss Elimination (cont’d)

6 1 6 5 6 
 
0 3.6667 4 4.3333 11 
0 0 6.8182 5.6364 9.0001
 
0 0 0 1.5600 3.1199 

b) Back Substitution
3.1199
x4   1.9999
1.5600
9.0001  5.6364  1.9999 
x3   0.33325
6.8182
11  4.3333  1.9999   4  0.33325 
x2   1.0000
3.6667
6  5  1.9999   6  0.33325   11.0000 
x1   0.50000
6
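The hand computation above can be verified with a Python sketch of Gauss elimination with partial pivoting (the exact solution of this system is x = (-1/2, 1, 1/3, 2)):

```python
# Sketch: Gauss elimination with partial pivoting,
# checked against the worked example above.
def gauss_pivot(A, b):
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: bring the largest |A[i][k]|, i >= k, to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[0.0,  2.0,  0.0, -1.0],
     [2.0,  2.0,  3.0, -2.0],
     [4.0, -3.0,  0.0, -1.0],
     [6.0,  1.0, -6.0,  5.0]]
b = [0.0, -2.0, -7.0, 6.0]
x = gauss_pivot(A, b)   # close to [-0.5, 1.0, 0.33333, 2.0]
```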
Use of more significant figures to solve for the
round-off error : Example.

Use Gauss elimination to solve these 2 equations (keeping
only 4 significant figures):

  0.0003 x1 + 3.0000 x2 = 2.0001
  1.0000 x1 + 1.0000 x2 = 1.0000

  0.0003 x1 + 3.0000 x2 = 2.0001
            - 9999.0 x2 = -6666.0

Solve: x2 = 0.6667 & x1 = 0.0

The exact solution is x2 = 2/3 & x1 = 1/3


Use of more significant figures to solve for the round-off
error :Example (cont’d).

2 2.0001  3(2 / 3)
x2  x1 
3 0.0003

Significant
x2 x1
Figures
3 0.667 -3.33
4 0.6667 0.000
5 0.66667 0.3000
6 0.666667 0.33000
7 0.6666667 0.333000
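The error magnification in the table comes from the small divisor 0.0003. A Python sketch of the same effect (binary floating point is used here rather than decimal significant figures, so the numbers differ slightly from the table):

```python
# Sketch: rounding x2 magnifies the error in x1 = (2.0001 - 3*x2)/0.0003.
results = {}
exact_x2 = 2.0 / 3.0
for digits in (3, 4, 5, 6, 7):
    x2 = round(exact_x2, digits)          # x2 kept to 'digits' decimals
    x1 = (2.0001 - 3.0 * x2) / 0.0003     # small numerator / tiny divisor
    results[digits] = x1                  # error in x2 magnified ~10^4 times
```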
Pivoting: Example to solve for the round-off error
Now, solving the previous example using the partial pivoting technique:

Original:
  0.0003 x1 + 3.0000 x2 = 2.0001
  1.0000 x1 + 1.0000 x2 = 1.0000

The pivot is 1.0, so the rows are switched:

  1.0000 x1 + 1.0000 x2 = 1.0000
  0.0003 x1 + 3.0000 x2 = 2.0001

Forward elimination:
  1.0000 x1 + 1.0000 x2 = 1.0000
              2.9997 x2 = 1.9998

  x2 = 0.6667 & x1 = 0.3333


Scaling: Example
• Solve the following equations using naive Gauss elimination
  (keeping only 3 significant figures):

  2 x1 + 100,000 x2 = 100,000
    x1 +         x2 = 2.0

• Forward elimination:

  2 x1 + 100,000 x2 = 100,000
       -  50,000 x2 = -50,000

  Solve: x2 = 1.00 & x1 = 0.00

• The exact solution is x1 = 1.00002 & x2 = 0.99998


Scaling: Example (cont’d)
B) Using the scaling algorithm to solve:
2 x1+ 100,000 x2 = 100,000
x1 + x2 = 2.0
Scaling the first equation by dividing by 100,000:
0.00002 x1+ x2 = 1.0
x1+ x2 = 2.0
Rows are pivoted:
x1 + x2 = 2.0
0.00002 x1+ x2 = 1.0
Forward elimination yields:
x1 + x2 = 2.0
x2 = 1.00
Solve: x2 = 1.00 & x1 = 1.00
The exact solution is x1 = 1.00002 & x2 = 0.99998
Scaling: Example (cont’d)
C) The scaled coefficients indicate that pivoting is necessary.
We therefore pivot but retain the original coefficients to give:
x1 + x2 = 2.0
2 x1+ 100,000 x2 = 100,000

Forward elimination yields:

x1 + x2 = 2.0
100,000 x2 = 100,000
Solve: x2 = 1.00 & x1 = 1.00

Thus, scaling was useful in determining whether pivoting was
necessary, but the equations themselves did not require scaling
to arrive at a correct result.
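Part (C)'s idea, using scaled coefficients only to choose the pivot while eliminating with the original coefficients, can be sketched as:

```python
rows = [[2.0, 100000.0, 100000.0],   # 2*x1 + 100,000*x2 = 100,000
        [1.0, 1.0, 2.0]]             #   x1 +         x2 = 2.0

# scale each row's leading coefficient by the row's largest |coefficient|
scaled = [abs(r[0]) / max(abs(r[0]), abs(r[1])) for r in rows]
# row 0 -> 2/100000 = 0.00002,  row 1 -> 1/1 = 1.0  => pivot on row 1
if scaled[1] > scaled[0]:
    rows[0], rows[1] = rows[1], rows[0]

# eliminate using the original (unscaled) coefficients
factor = rows[1][0] / rows[0][0]
x2 = (rows[1][2] - factor * rows[0][2]) / (rows[1][1] - factor * rows[0][1])
x1 = (rows[0][2] - rows[0][1] * x2) / rows[0][0]
# x1 ~ 1.00002, x2 ~ 0.99998, matching the exact solution quoted above
```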
5. Gauss-Jordan Elimination

• It is a variation of Gauss elimination. The major


differences are:

• When an unknown is eliminated, it is eliminated


from all other equations rather than just the
subsequent ones.
• All rows are normalized by dividing them by their
pivot elements.
• Elimination step results in an identity matrix.
• It is not necessary to employ back substitution to
obtain solution.
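A minimal Python sketch of Gauss-Jordan (with no pivoting safeguards, so it is tested here on the 3x3 system of Example 2 above rather than on the 4x4 example that follows, whose first pivot is zero):

```python
def gauss_jordan(A, b):
    n = len(b)
    aug = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        pivot = aug[k][k]                            # naive: no zero-pivot check
        aug[k] = [v / pivot for v in aug[k]]         # normalize pivot row
        for i in range(n):
            if i != k:                               # eliminate in ALL other rows
                f = aug[i][k]
                aug[i] = [v - f * p for v, p in zip(aug[i], aug[k])]
    return [row[n] for row in aug]                   # left block is now identity

A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
x = gauss_jordan(A, b)   # close to [3.0, -2.5, 7.0]
```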
Gauss-Jordan Elimination- Example

0 2 0 1 0 1 0.16667 1 0.83335 1
  
R 4  
2 2 3 2 2  R 1 
 2 2 3 2 2 
 4 3 0 1 7  R 4 / 6.0 4 3 0 1 7 
   
 6 1 6 5 6  0 2 0 1 0 

1 0.16667 1 0.83335 1
 
2 2 3 2 2  R 2  2  R 1
4 3 0 1 7  R 3  4  R 1
 
 0 2 0 1 0 
1 0.16667 1 0.83335 1
 
0 1.6667 5 3.6667 2 
0 3.6667 4 4.3334 7 
 
0 2 0 1 0 

Dividing the 2nd row by 1.6667 and reducing the second column (operating
above the diagonal as well as below) gives:

  | 1  0 -1.5000   1.2000 |  1.4000 |
  | 0  1  2.9999  -2.2000 | -2.4000 |
  | 0  0  15.000 -12.400  | -19.800 |
  | 0  0 -5.9998   3.4000 |  4.8000 |

Divide the 3rd row by 15.000 and make the other elements in the 3rd column
zero:

  | 1  0  0 -0.04000 | -0.58000 |
  | 0  1  0  0.27993 |  1.5599  |
  | 0  0  1 -0.82667 | -1.3200  |
  | 0  0  0 -1.5599  | -3.1197  |

Divide the 4th row by -1.5599 and create zeros above the diagonal in the
fourth column:

  | 1  0  0  0 | -0.49999 |
  | 0  1  0  0 |  1.0001  |
  | 0  0  1  0 |  0.33326 |
  | 0  0  0  1 |  1.9999  |

Note: The Gauss-Jordan method requires almost 50% more operations than Gauss
elimination; therefore its use is generally not recommended.
Solving With MATLAB

• MATLAB provides two direct ways to solve


systems of linear algebraic equations [A]{x}={b}:
• Left-division
x = A\b
• Matrix inversion
x = inv(A)*b
• The matrix inverse is less efficient than left-division
and also only works for square, non-singular
systems.
