

Computation and Simulation EE317


Dr Conor Brennan
Room S339
School of Electronic Engineering
Dublin City University
brennanc@eeng.dcu.ie

Section One: Numerical Differentiation


This section examines ways to numerically approximate the first and second
derivatives of a function. Firstly, a word about notation. Let f(x) be a function
that takes the specific value f(x_0) at the point x_0. In this handout we use the
notation f^{(1)}(x_0) to denote the value of the first derivative of f at the point
x_0, that is

    f^{(1)}(x_0) = \left. \frac{df}{dx} \right|_{x=x_0}    (1)

The second and third derivatives evaluated at x_0 are thus denoted by f^{(2)}(x_0)
and f^{(3)}(x_0) respectively, and so on. Note that in some textbooks derivatives are
denoted using primes, that is the first derivative is denoted f'(x), the second
derivative f''(x) and so on.

1 Forward, Backward and Central Difference


The Taylor series relates the value of a function at a point x_0 to the value of the
function at a nearby point (x_0 + h), that is

    f(x_0 + h) = f(x_0) + h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) + \frac{h^3}{3!} f^{(3)}(x_0) + \cdots    (2)

We can rearrange the above to get

    \frac{f(x_0 + h) - f(x_0)}{h} = f^{(1)}(x_0) + \frac{h}{2!} f^{(2)}(x_0) + \cdots    (3)

and this can be used to make the forward difference approximation to f^{(1)}(x_0), thus

    f^{(1)}(x_0) \simeq \frac{f(x_0 + h) - f(x_0)}{h}    (4)
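As a quick illustration (the code is not part of the original handout), the forward difference of equation (4) takes one line; the test function f(x) = x^2 and the step size are arbitrary choices:

```python
def forward_difference(f, x0, h):
    """O(h) forward-difference estimate of f'(x0), as in equation (4)."""
    return (f(x0 + h) - f(x0)) / h

# Example: f(x) = x^2 has f'(1) = 2. With h = 0.01 the estimate is 2.01,
# because for a quadratic the error is exactly -(h/2) f''(x0) = -0.01.
estimate = forward_difference(lambda x: x**2, 1.0, 0.01)
```

The built-in bias of the formula is visible here: the estimate overshoots the true derivative by an amount proportional to h.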
Obviously this is only an approximation, albeit one that is more accurate as h gets
smaller; indeed, in the limit as h \to 0 it is exact. How precisely does the accuracy
depend on h? Let \epsilon_f denote the error incurred using this forward difference
approximation. Then

    \epsilon_f \equiv f^{(1)}(x_0) - \frac{f(x_0 + h) - f(x_0)}{h}    (5)
               = -\frac{h}{2!} f^{(2)}(x_0) + \cdots    (6)

where we have used equation (3). So this approximation is of order O(h). This
means that as h gets smaller the error of the forward difference approximation also
gets smaller, in a linear fashion. Can we do better than this? Well, we could
examine how the function varies as we step backwards from x_0, that is, create the
backward difference approximation. Using the Taylor series we can write

    f(x_0 - h) = f(x_0) - h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) - \frac{h^3}{3!} f^{(3)}(x_0) + \cdots    (7)
Rearranging yields

    \frac{f(x_0 - h) - f(x_0)}{h} = -f^{(1)}(x_0) + \frac{h}{2!} f^{(2)}(x_0) - \frac{h^2}{3!} f^{(3)}(x_0) + \cdots    (8)
and so we can construct the following backward difference approximation

    f^{(1)}(x_0) \simeq \frac{f(x_0) - f(x_0 - h)}{h}    (9)
Again, this is an approximation that is more accurate as h gets smaller, but can we
quantify how the error depends on h? Inspection of the equations yields

    \epsilon_b \equiv f^{(1)}(x_0) - \frac{f(x_0) - f(x_0 - h)}{h}    (10)
               = \frac{h}{2!} f^{(2)}(x_0) - \frac{h^2}{3!} f^{(3)}(x_0) + \cdots    (11)
This approximation is again of order O (h). As h gets smaller the error gets smaller
in a linear fashion, and so we have not improved on our forward difference approxi-
mation. However we can combine these two approximations to get an improved one.
Subtracting equation (7) from equation (2) we see that

    f(x_0 + h) - f(x_0 - h) = \left( f(x_0) + h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) + \frac{h^3}{3!} f^{(3)}(x_0) + \cdots \right)    (12)
                            - \left( f(x_0) - h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) - \frac{h^3}{3!} f^{(3)}(x_0) + \cdots \right)    (13)
                            = 2 \left( h f^{(1)}(x_0) + \frac{h^3}{3!} f^{(3)}(x_0) + \cdots \right)    (14)

Therefore
    \frac{f(x_0 + h) - f(x_0 - h)}{2h} = f^{(1)}(x_0) + \frac{h^2}{3!} f^{(3)}(x_0) + \cdots    (15)
and we can therefore construct the central difference approximation

    f^{(1)}(x_0) \simeq \frac{f(x_0 + h) - f(x_0 - h)}{2h}    (16)
The error for this approximation is given by

    \epsilon_c = -\frac{h^2}{3!} f^{(3)}(x_0) + \cdots    (17)

The central difference approximation is thus accurate to order O(h2 ) and so, for
small values of h, will in general be better than both the forward and backward
approximations.
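A small numerical experiment (not in the original handout) makes the order of accuracy concrete. Assuming f = sin as an arbitrary test function with known derivative cos, shrinking h tenfold should cut the forward error roughly tenfold (O(h)) but the central error roughly a hundredfold (O(h^2)):

```python
import math

def forward(f, x0, h):  return (f(x0 + h) - f(x0)) / h
def backward(f, x0, h): return (f(x0) - f(x0 - h)) / h
def central(f, x0, h):  return (f(x0 + h) - f(x0 - h)) / (2 * h)

# Errors at x0 = 1, where the exact derivative of sin is cos(1).
x0, exact = 1.0, math.cos(1.0)
errs = {h: (abs(forward(math.sin, x0, h) - exact),
            abs(central(math.sin, x0, h) - exact))
        for h in (0.1, 0.01)}
# errs[h] = (forward error, central error); the central error is smaller
# and shrinks quadratically as h decreases.
```

The backward difference behaves like the forward one; only its leading error term changes sign.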
Example: Consider the polynomial

    f(x) = x^3 + x^2 - 1.25x - 0.75    (18)
Figure (1) shows sampled values of the polynomial in the range [−2, 1.5] where we
have sampled in steps of h = 0.2. The derivative of this polynomial is given by
[Figure 1: Sampled values of polynomial]

    f^{(1)}(x) = 3x^2 + 2x - 1.25    (19)


and is shown in figure (2) along with the forward, backward and central difference
approximations. We see that the central difference approximation is clearly superior
to both the forward and backward difference approximations.
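This comparison is easy to reproduce numerically (a sketch, not part of the handout); the evaluation point x_0 = 0.4 is an arbitrary grid point from the sampled range:

```python
# The handout's example: f(x) = x^3 + x^2 - 1.25x - 0.75 sampled with
# h = 0.2, with exact derivative f'(x) = 3x^2 + 2x - 1.25.
h = 0.2
f  = lambda x: x**3 + x**2 - 1.25*x - 0.75
df = lambda x: 3*x**2 + 2*x - 1.25

x0 = 0.4                                      # exact derivative df(0.4) = 0.03
fwd = (f(x0 + h) - f(x0)) / h                 # forward difference -> 0.51
ctr = (f(x0 + h) - f(x0 - h)) / (2 * h)       # central difference -> 0.07
# The central estimate misses by 0.04 while the forward one misses by 0.48,
# matching the O(h^2) versus O(h) error analysis.
```

For a cubic the third derivative is constant, so the central-difference error h^2/3! f^{(3)} = 0.04 is exact here rather than merely leading-order.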

1.1 Second Order Derivatives


A similar procedure can be followed to derive approximations to higher order
derivatives. If we add equation (2) and equation (7) we get

    f(x_0 + h) + f(x_0 - h) = \left( f(x_0) + h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) + \frac{h^3}{3!} f^{(3)}(x_0) + \cdots \right)    (20)
                            + \left( f(x_0) - h f^{(1)}(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) - \frac{h^3}{3!} f^{(3)}(x_0) + \cdots \right)    (21)
                            = 2 \left( f(x_0) + \frac{h^2}{2!} f^{(2)}(x_0) + \frac{h^4}{4!} f^{(4)}(x_0) + \cdots \right)    (22)

[Figure 2: Exact derivative versus the backward, forward and central difference approximations]

Therefore
    \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} = f^{(2)}(x_0) + \frac{h^2}{12} f^{(4)}(x_0) + \cdots    (23)
So we can construct the central difference approximation to the second derivative

    f^{(2)}(x_0) \simeq \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2}    (24)
The error is of order O(h2 ) and is given by

    \epsilon_c = -\frac{h^2}{12} f^{(4)}(x_0) + \cdots    (25)
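Equation (24) is equally short in code (an illustrative sketch, not from the handout); f = exp is a convenient arbitrary test since every derivative of e^x at 0 equals 1:

```python
import math

def second_derivative(f, x0, h):
    """O(h^2) central-difference estimate of f''(x0), as in equation (24)."""
    return (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2

# For f = exp at x0 = 0 the exact second derivative is 1; with h = 0.01
# the error is approximately (h^2/12) f''''(0) = 8.3e-6, per equation (25).
est = second_derivative(math.exp, 0.0, 0.01)
```

Note the division by h^2: for very small h the numerator suffers catastrophic cancellation, which is one motivation for the extrapolation technique of the next section.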

2 Richardson’s Extrapolation
Clearly it is possible to obtain better approximations by using more and more
samples of the function, thereby using smaller and smaller step sizes h. However, as
h becomes smaller we run an increased risk of encountering numerical round-off
error due to the finite precision of the computer. Also, when we are using finite
differences to solve partial or ordinary differential equations we would like to get
the most accurate answers from the fewest samples, and this requires that we keep
h relatively large. Is there a way that we can reduce the error of the approximation
even further without having to reduce h? Well, we have the following Taylor

expansions

    f(x_0 + h) = \sum_{n=0}^{\infty} \frac{h^n}{n!} f^{(n)}(x_0)    (26)

    f(x_0 - h) = \sum_{n=0}^{\infty} (-1)^n \frac{h^n}{n!} f^{(n)}(x_0)    (27)

which enable us to write the following central difference approximation:

    \frac{f(x_0 + h) - f(x_0 - h)}{2h} = f^{(1)}(x_0) + \frac{h^2}{3!} f^{(3)}(x_0) + \frac{h^4}{5!} f^{(5)}(x_0) + \cdots    (28)
If we have evenly sampled function data we can also create an estimate based on
step size 2h. The Taylor series expansion is

    f(x_0 + 2h) = \sum_{n=0}^{\infty} \frac{(2h)^n}{n!} f^{(n)}(x_0)    (29)

    f(x_0 - 2h) = \sum_{n=0}^{\infty} (-1)^n \frac{(2h)^n}{n!} f^{(n)}(x_0)    (30)

and so the central difference approximation is given by

    \frac{f(x_0 + 2h) - f(x_0 - 2h)}{4h} = f^{(1)}(x_0) + \frac{4h^2}{3!} f^{(3)}(x_0) + \frac{16h^4}{5!} f^{(5)}(x_0) + \cdots    (31)
Why would we do this? A central difference based on equation (31) won’t be as
accurate as one based on equation (28) as the step size between samples is greater (2h
rather than h). However we can combine the estimates to create an approximation
that is better than either of them. This process is called Richardson extrapolation.
If we multiply equation (28) by 4 and subtract (31) we get

    \frac{f(x_0 - 2h) - 8f(x_0 - h) + 8f(x_0 + h) - f(x_0 + 2h)}{4h} = 3 f^{(1)}(x_0) - 12 \frac{h^4}{5!} f^{(5)}(x_0) + \cdots    (32)
and so an improved estimate is given by

    f^{(1)}(x_0) \simeq \frac{f(x_0 - 2h) - 8f(x_0 - h) + 8f(x_0 + h) - f(x_0 + 2h)}{12h}    (33)
The error is O(h4 ) and is given by

    \epsilon_R = 4 \frac{h^4}{5!} f^{(5)}(x_0) + \cdots    (34)
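The gain from equation (33) is easy to see numerically (a sketch, not part of the handout); the test function sin and the parameters x_0 = 1, h = 0.1 are arbitrary choices:

```python
import math

def central(f, x0, h):
    """O(h^2) central difference for f'(x0), equation (16)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

def richardson_first(f, x0, h):
    """Five-point O(h^4) estimate of f'(x0), equation (33)."""
    return (f(x0 - 2*h) - 8*f(x0 - h) + 8*f(x0 + h) - f(x0 + 2*h)) / (12 * h)

# With f = sin at x0 = 1 (exact derivative cos(1)) and the same h = 0.1,
# the five-point formula is orders of magnitude more accurate than the
# plain central difference: error ~ 4(h^4/5!) f^{(5)}(1) ~ 1.8e-6 versus ~ 9e-4.
err_central    = abs(central(math.sin, 1.0, 0.1) - math.cos(1.0))
err_richardson = abs(richardson_first(math.sin, 1.0, 0.1) - math.cos(1.0))
```

Both estimates use only evenly spaced samples, so the improvement costs nothing extra when the data is already tabulated on a grid.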
We can also use Richardson’s extrapolation to improve our estimate of a second
derivative from O(h2 ) to O(h4 ). We get

−f (x0 − 2h) + 16f (x0 − h) − 30f (x0 ) + 16f (x0 + h) − f (x0 + 2h)
f (2) (x0 ) = 2
+O(h4 )
12h
(35)
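The second-derivative version can be sketched the same way (code not from the handout; f = cos and the parameters are arbitrary test choices):

```python
import math

def richardson_second(f, x0, h):
    """Five-point O(h^4) estimate of f''(x0), equation (35)."""
    return (-f(x0 - 2*h) + 16*f(x0 - h) - 30*f(x0)
            + 16*f(x0 + h) - f(x0 + 2*h)) / (12 * h**2)

# For f = cos at x0 = 0 the exact second derivative is -1; with h = 0.1
# the estimate agrees to about six decimal places.
est = richardson_second(math.cos, 0.0, 0.1)
```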

Example: A function is given in tabular form below. Find approximations to the
1st derivative of f(x) at x = 0.8 with error O(h), O(h^2) and O(h^4) respectively.
Find approximations to the 2nd derivative with error O(h^2) and O(h^4) respectively.

      x  | 0.6    | 0.7    | 0.8    | 0.9    | 1.0
    f(x) | 5.9072 | 6.0092 | 6.3552 | 6.9992 | 8.0000

Here h = 0.1.
Approximation of f^{(1)}(x) accurate to O(h) is

    f^{(1)}(0.8) \simeq \frac{f(0.9) - f(0.8)}{0.1} = \frac{6.9992 - 6.3552}{0.1} = 6.44

Approximation of f^{(1)}(x) accurate to O(h^2) is

    f^{(1)}(0.8) \simeq \frac{f(0.9) - f(0.7)}{2(0.1)} = \frac{6.9992 - 6.0092}{0.2} = 4.95

Approximation of f^{(1)}(x) accurate to O(h^4) is

    f^{(1)}(0.8) \simeq \frac{f(0.6) - 8f(0.7) + 8f(0.9) - f(1.0)}{12(0.1)} = \frac{5.9072 - 8(6.0092) + 8(6.9992) - 8.0000}{1.2} = 4.856

Approximation of f^{(2)}(x) accurate to O(h^2) is

    f^{(2)}(0.8) \simeq \frac{f(0.9) - 2f(0.8) + f(0.7)}{(0.1)^2} = \frac{6.9992 - 2(6.3552) + 6.0092}{0.01} = 29.8

Approximation of f^{(2)}(x) accurate to O(h^4) is

    f^{(2)}(0.8) \simeq \frac{-f(0.6) + 16f(0.7) - 30f(0.8) + 16f(0.9) - f(1.0)}{12(0.1)^2} = \frac{-5.9072 + 16(6.0092) - 30(6.3552) + 16(6.9992) - 8.0000}{0.12} = 29.76
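All five worked values can be checked in a few lines (a sanity-check script, not part of the handout), applying each formula directly to the tabulated data:

```python
# Tabulated data from the example, with h = 0.1.
fs = [5.9072, 6.0092, 6.3552, 6.9992, 8.0000]   # f at x = 0.6, 0.7, 0.8, 0.9, 1.0
h = 0.1

fwd  = (fs[3] - fs[2]) / h                            # O(h) forward      -> 6.44
ctr  = (fs[3] - fs[1]) / (2 * h)                      # O(h^2) central    -> 4.95
rich = (fs[0] - 8*fs[1] + 8*fs[3] - fs[4]) / (12*h)   # O(h^4), eq. (33)  -> 4.856
sec2 = (fs[3] - 2*fs[2] + fs[1]) / h**2               # O(h^2), eq. (24)  -> 29.8
sec4 = (-fs[0] + 16*fs[1] - 30*fs[2]
        + 16*fs[3] - fs[4]) / (12 * h**2)             # O(h^4), eq. (35)  -> 29.76
```

Only sampled values enter each formula, so the whole example is a direct lookup-and-combine exercise on the table.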
