The second and third derivatives evaluated at x_0 are thus denoted by f^{(2)}(x_0) and f^{(3)}(x_0) respectively, and so on. Note that in some textbooks derivatives are denoted using primes: the first derivative is written f'(x), the second derivative f''(x), and so on.
where we have used equation (3). So this approximation is of order O(h): as h gets smaller, the error of the forward difference approximation also gets smaller, in a linear fashion. Can we do better than this? Well, we could examine how the function varies as we decrease x, that is, create the backward difference approximation. Using the Taylor series we can write
f(x_0 - h) = f(x_0) - h f^{(1)}(x_0) + \frac{h^2}{2} f^{(2)}(x_0) - \frac{h^3}{3!} f^{(3)}(x_0) + \cdots .   (7)
Rearranging yields
\frac{f(x_0 - h) - f(x_0)}{h} = -f^{(1)}(x_0) + \frac{h}{2} f^{(2)}(x_0) - \frac{h^2}{3!} f^{(3)}(x_0) + \cdots ,   (8)
and so we can construct the following backward difference approximation
f^{(1)}(x_0) \simeq \frac{f(x_0) - f(x_0 - h)}{h}.   (9)
Again, this is an approximation that is more accurate as h gets smaller, but can we
quantify how the error depends on h? Inspection of the equations yields
\epsilon_b \equiv f^{(1)}(x_0) - \frac{f(x_0) - f(x_0 - h)}{h}   (10)
           = \frac{h}{2} f^{(2)}(x_0) - \frac{h^2}{3!} f^{(3)}(x_0) + \cdots .   (11)
This approximation is again of order O(h): as h gets smaller the error gets smaller in a linear fashion, and so we have not improved on our forward difference approximation. However, we can combine these two approximations to get an improved one. Subtracting equation (7) from equation (2), we see that
f(x_0 + h) - f(x_0 - h) = \left( f(x_0) + h f^{(1)}(x_0) + \frac{h^2}{2} f^{(2)}(x_0) + \frac{h^3}{3!} f^{(3)}(x_0) + \cdots \right)   (12)
                        - \left( f(x_0) - h f^{(1)}(x_0) + \frac{h^2}{2} f^{(2)}(x_0) - \frac{h^3}{3!} f^{(3)}(x_0) + \cdots \right)   (13)
                        = 2 \left( h f^{(1)}(x_0) + \frac{h^3}{3!} f^{(3)}(x_0) + \cdots \right).   (14)
Therefore
\frac{f(x_0 + h) - f(x_0 - h)}{2h} = f^{(1)}(x_0) + \frac{h^2}{3!} f^{(3)}(x_0) + \cdots ,   (15)
and we can therefore construct the central difference approximation
f^{(1)}(x_0) \simeq \frac{f(x_0 + h) - f(x_0 - h)}{2h}.   (16)
The error for this approximation is given by
\epsilon_c = -\frac{h^2}{3!} f^{(3)}(x_0) + \cdots .   (17)
Course EE317 Computation and Simulation 3
The central difference approximation is thus accurate to order O(h2 ) and so, for
small values of h, will in general be better than both the forward and backward
approximations.
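These error orders are easy to verify numerically. The Python sketch below (sin(x) is an arbitrary test function, not from the notes; its exact derivative is cos(x)) halves h repeatedly: the forward and backward errors shrink roughly linearly with h, while the central error shrinks roughly as h^2.

```python
import math

def forward(f, x0, h):
    # forward difference approximation, error O(h)
    return (f(x0 + h) - f(x0)) / h

def backward(f, x0, h):
    # backward difference approximation, equation (9), error O(h)
    return (f(x0) - f(x0 - h)) / h

def central(f, x0, h):
    # central difference approximation, equation (16), error O(h^2)
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

x0 = 1.0
exact = math.cos(x0)  # exact derivative of sin at x0
for h in (0.1, 0.05, 0.025):
    print(f"h={h:<6} forward={abs(forward(math.sin, x0, h) - exact):.2e} "
          f"backward={abs(backward(math.sin, x0, h) - exact):.2e} "
          f"central={abs(central(math.sin, x0, h) - exact):.2e}")
```

Halving h roughly halves the one-sided errors but quarters the central error, consistent with equations (6), (11), and (17).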
Example Consider the polynomial
f(x) = x^3 + x^2 - 1.25x - 0.75   (18)
Figure (1) shows sampled values of the polynomial in the range [−2, 1.5] where we
have sampled in steps of h = 0.2. The derivative of this polynomial is given by

f^{(1)}(x) = 3x^2 + 2x - 1.25.   (19)

[Figure 1: Sampled values of f (top) and its first derivative (bottom) on [−2, 1.5], comparing the exact derivative with the backward, forward, and central difference approximations.]
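The comparison in Figure 1 can be reproduced at a single point. A minimal Python sketch, using the polynomial of equation (18) and its exact derivative (the evaluation point x0 = 0.5 is an arbitrary choice):

```python
def f(x):
    # the polynomial of equation (18)
    return x**3 + x**2 - 1.25*x - 0.75

def fprime(x):
    # its exact derivative
    return 3*x**2 + 2*x - 1.25

x0, h = 0.5, 0.2
fwd = (f(x0 + h) - f(x0)) / h            # forward difference
bwd = (f(x0) - f(x0 - h)) / h            # backward difference
cen = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central difference
print(fwd, bwd, cen, "exact:", fprime(x0))
```

Even at the fairly coarse step h = 0.2, the central estimate lands an order of magnitude closer to the exact value than either one-sided estimate, as the figure suggests.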
Adding the Taylor expansions of f(x_0 + h) and f(x_0 - h) cancels the odd-derivative terms, and dividing by h^2 yields

\frac{f(x_0 + h) - 2 f(x_0) + f(x_0 - h)}{h^2} = f^{(2)}(x_0) + \frac{h^2}{12} f^{(4)}(x_0) + \cdots .   (23)
So we can construct the central difference approximation to the second derivative,

f^{(2)}(x_0) \simeq \frac{f(x_0 + h) - 2 f(x_0) + f(x_0 - h)}{h^2},   (24)

with error

\epsilon_c = -\frac{h^2}{12} f^{(4)}(x_0) + \cdots .   (25)
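The O(h^2) behaviour of this second derivative approximation can be checked numerically. A short sketch, again using sin(x) as an arbitrary test function (its exact second derivative is −sin(x)); halving h should cut the error by roughly a factor of four:

```python
import math

def second_central(f, x0, h):
    # central difference for the second derivative, equation (24)
    return (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2

x0 = 1.0
exact = -math.sin(x0)  # exact second derivative of sin at x0
for h in (0.1, 0.05):
    err = abs(second_central(math.sin, x0, h) - exact)
    print(f"h={h}  error={err:.2e}")
```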
2 Richardson’s Extrapolation
Clearly it is possible to obtain better approximations by using more and more samples of the function, thereby using smaller and smaller step sizes h. However, as h becomes smaller we run an increased risk of encountering numerical round-off error due to the finite precision of the computer. Also, when we use finite differences to solve partial or ordinary differential equations, we would like to get the most accurate answers with the fewest samples; this requires that we keep h relatively large. Is there a way that we can reduce the error of the approximation even further without having to reduce h? Well, we have the following Taylor
expansions
f(x_0 + h) = \sum_{n=0}^{\infty} \frac{h^n}{n!} f^{(n)}(x_0)   (26)

f(x_0 - h) = \sum_{n=0}^{\infty} (-1)^n \frac{h^n}{n!} f^{(n)}(x_0)   (27)

\frac{f(x_0 + h) - f(x_0 - h)}{2h} = f^{(1)}(x_0) + \frac{h^2}{3!} f^{(3)}(x_0) + \frac{h^4}{5!} f^{(5)}(x_0) + \cdots   (28)
If we have evenly sampled function data we can also create an estimate based on
step size 2h. The Taylor series expansion is
f(x_0 + 2h) = \sum_{n=0}^{\infty} \frac{(2h)^n}{n!} f^{(n)}(x_0)   (29)

f(x_0 - 2h) = \sum_{n=0}^{\infty} (-1)^n \frac{(2h)^n}{n!} f^{(n)}(x_0)   (30)
These give a central difference estimate with step size 2h,

\frac{f(x_0 + 2h) - f(x_0 - 2h)}{4h} = f^{(1)}(x_0) + \frac{(2h)^2}{3!} f^{(3)}(x_0) + \frac{(2h)^4}{5!} f^{(5)}(x_0) + \cdots .

Taking four times equation (28), subtracting this estimate, and dividing by three eliminates the h^2 term, giving Richardson's extrapolation

f^{(1)}(x_0) \simeq \frac{f(x_0 - 2h) - 8 f(x_0 - h) + 8 f(x_0 + h) - f(x_0 + 2h)}{12h},

whose error is of order O(h^4):

\epsilon_R = 4 \frac{h^4}{5!} f^{(5)}(x_0) + \cdots .   (34)
We can also use Richardson’s extrapolation to improve our estimate of a second
derivative from O(h2 ) to O(h4 ). We get
f^{(2)}(x_0) = \frac{-f(x_0 - 2h) + 16 f(x_0 - h) - 30 f(x_0) + 16 f(x_0 + h) - f(x_0 + 2h)}{12 h^2} + O(h^4).   (35)
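Both O(h^4) five-point formulas can be verified in a few lines of Python. The first derivative formula below is the Richardson combination of the h and 2h central difference estimates; the second derivative formula is equation (35). sin(x) is again an arbitrary test function:

```python
import math

def d1_five_point(f, x0, h):
    # Richardson-extrapolated first derivative, error O(h^4)
    return (f(x0 - 2*h) - 8*f(x0 - h) + 8*f(x0 + h) - f(x0 + 2*h)) / (12 * h)

def d2_five_point(f, x0, h):
    # Richardson-extrapolated second derivative, equation (35), error O(h^4)
    return (-f(x0 - 2*h) + 16*f(x0 - h) - 30*f(x0)
            + 16*f(x0 + h) - f(x0 + 2*h)) / (12 * h**2)

x0, h = 1.0, 0.1
print("first derivative error :", abs(d1_five_point(math.sin, x0, h) - math.cos(x0)))
print("second derivative error:", abs(d2_five_point(math.sin, x0, h) + math.sin(x0)))
```

Even at h = 0.1 both errors are already of order 10^{-6}, and halving h shrinks the first derivative error by roughly a factor of sixteen, as expected for an O(h^4) scheme.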
Using the forward difference approximation,

f^{(1)}(0.8) \simeq \frac{f(0.9) - f(0.8)}{0.1} = \frac{6.9992 - 6.3552}{0.1} = 6.44,

and using the central difference approximation,

f^{(1)}(0.8) \simeq \frac{f(0.9) - f(0.7)}{2(0.1)} = \frac{6.9992 - 6.0092}{0.2} = 4.95.