
Physics 116C

Fall 2012

Applications of the Wronskian to


ordinary linear differential equations
Consider a set of n continuous functions y_i(x) [i = 1, 2, 3, \ldots, n], each of which is
differentiable at least n times. If there exists a set of constants c_i, not all zero,
such that

c_1 y_1(x) + c_2 y_2(x) + \cdots + c_n y_n(x) = 0 ,                    (1)

then we say that the set of functions {y_i(x)} is linearly dependent. If the only solution
to eq. (1) is c_i = 0 for all i, then the set of functions {y_i(x)} is linearly independent.
The Wronskian matrix is defined as:

\Phi[y_i(x)] = \begin{pmatrix}
y_1 & y_2 & \cdots & y_n \\
y_1' & y_2' & \cdots & y_n' \\
y_1'' & y_2'' & \cdots & y_n'' \\
\vdots & \vdots & \ddots & \vdots \\
y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)}
\end{pmatrix},
where

y_i' \equiv \frac{dy_i}{dx} , \qquad
y_i'' \equiv \frac{d^2 y_i}{dx^2} , \qquad \ldots , \qquad
y_i^{(n-1)} \equiv \frac{d^{n-1} y_i}{dx^{n-1}} .
The Wronskian is defined to be the determinant of the Wronskian matrix,

W(x) \equiv \det \Phi[y_i(x)] .                                        (2)

According to the contrapositive of eq. (8.5) on p. 133 of Boas, if {y_i(x)} is a linearly
dependent set of functions, then the Wronskian must vanish. However, the converse is
not necessarily true: one can find cases in which the Wronskian vanishes without the
functions being linearly dependent. (For further details, see problem 3.8.16 on p. 136
of Boas.)
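Both cases can be checked symbolically. The sketch below is a minimal illustration, assuming SymPy is available (its `wronskian` helper builds the matrix of derivatives and takes the determinant); the function choices are hypothetical examples, not taken from the text above.

```python
import sympy as sp

x = sp.symbols('x')

# Linearly independent pair: sin(x), cos(x).  Their Wronskian is
# det [[sin, cos], [cos, -sin]] = -sin^2 - cos^2 = -1, which never vanishes.
W_indep = sp.simplify(sp.wronskian([sp.sin(x), sp.cos(x)], x))

# Linearly dependent pair: x and 2x.  The Wronskian vanishes identically.
W_dep = sp.simplify(sp.wronskian([x, 2*x], x))

print(W_indep, W_dep)  # -1 0
```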
Nevertheless, if the yi (x) are solutions to an nth order ordinary linear differential
equation, then the converse does hold. That is, if the yi (x) are solutions to an nth
order ordinary linear differential equation and the Wronskian of the yi (x) vanishes,
then {y_i(x)} is a linearly dependent set of functions. Moreover, if the Wronskian does
not vanish for some value of x, then it is nonzero for all values of x, in which
case an arbitrary linear combination of the y_i(x) constitutes the most general solution
to the nth order ordinary linear differential equation.
To demonstrate that the Wronskian either vanishes for all values of x or is never
equal to zero when the y_i(x) are solutions to an nth order ordinary linear differential
equation, we shall derive a formula for the Wronskian. Consider the differential equation

a_0(x)\,y^{(n)} + a_1(x)\,y^{(n-1)} + \cdots + a_{n-1}(x)\,y' + a_n(x)\,y = 0 .       (3)
We are interested in solving this equation over an interval of the real axis a < x < b in
which a_0(x) \neq 0. We can rewrite eq. (3) as a first order matrix differential equation.
Defining the vector

\vec{Y} = \begin{pmatrix} y \\ y' \\ y'' \\ \vdots \\ y^{(n-1)} \end{pmatrix},

it is straightforward to verify that eq. (3) is equivalent to

\frac{d\vec{Y}}{dx} = A(x)\,\vec{Y} ,
where the matrix A(x) is given by

A(x) = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-\dfrac{a_n(x)}{a_0(x)} & -\dfrac{a_{n-1}(x)}{a_0(x)} & -\dfrac{a_{n-2}(x)}{a_0(x)} & \cdots & -\dfrac{a_1(x)}{a_0(x)}
\end{pmatrix} .                                                          (4)
It immediately follows that if the y_i(x) are linearly independent solutions to eq. (3),
then the Wronskian matrix satisfies the first order matrix differential equation,

\frac{d\Phi}{dx} = A(x)\,\Phi .                                          (5)
Using eq. (23) of Appendix A, it follows that

\frac{d}{dx}\det\Phi = \det\Phi \,\mathrm{Tr}\!\left(\Phi^{-1}\frac{d\Phi}{dx}\right) = \det\Phi \,\mathrm{Tr}\,A(x) ,

after using eq. (5) and the cyclicity property of the trace (i.e., the trace is unchanged
by cyclically permuting the matrices inside the trace). In terms of the Wronskian W
defined in eq. (2),
\frac{dW}{dx} = W \,\mathrm{Tr}\,A(x) .                                  (6)

This is a first order differential equation for W that is easily integrated,

W(x) = W(x_0)\,\exp\left(\int_{x_0}^{x} \mathrm{Tr}\,A(t)\,dt\right) .
Using eq. (4), it follows that \mathrm{Tr}\,A(t) = -a_1(t)/a_0(t). Hence, we arrive at Liouville's
formula (also called Abel's formula),

W(x) = W(x_0)\,\exp\left(-\int_{x_0}^{x} \frac{a_1(t)}{a_0(t)}\,dt\right) .          (7)

Note that if W(x_0) \neq 0, then the result for W(x) is strictly positive or strictly negative,
depending on the sign of W(x_0). This confirms our assertion that the Wronskian either
vanishes for all values of x or is never equal to zero.
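As a concrete check of Liouville's formula, one can compare it against a Wronskian computed directly. A minimal SymPy sketch (SymPy assumed available), using the Euler equation x^2 y'' - 2x y' + 2y = 0 on x > 0 with x_0 = 1, a hypothetical example whose solutions y_1 = x and y_2 = x^2 are known:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

# Euler equation x^2 y'' - 2x y' + 2y = 0: here a0 = x^2, a1 = -2x,
# with solutions y1 = x and y2 = x^2 (hypothetical example).
y1, y2 = x, x**2
W_direct = sp.simplify(y1*sp.diff(y2, x) - sp.diff(y1, x)*y2)  # y1 y2' - y1' y2

# Liouville's formula with x0 = 1:  W(x) = W(x0) exp(-int_{x0}^x a1/a0 dt)
a0, a1 = t**2, -2*t
W_liouville = sp.simplify(
    W_direct.subs(x, 1) * sp.exp(-sp.integrate(a1/a0, (t, 1, x)))
)

print(W_direct, W_liouville)  # x**2 x**2
```

Both routes give W(x) = x^2, which is nonzero everywhere on x > 0, as the text asserts.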
Let us apply these results to an ordinary second order linear differential equation,

y'' + a(x)\,y' + b(x)\,y = 0 ,                                           (8)

where for convenience we have divided out by the function that originally multiplied y''.
Then eq. (7) yields the Wronskian, which we shall write in the form:

W(x) = c\,\exp\left(-\int^{x} a(x)\,dx\right) ,                          (9)

where c is an arbitrary nonzero constant and \int^{x} a(x)\,dx is the indefinite integral of
a(x).
For the case of a second order linear differential equation, there is a simpler and
more direct derivation of eq. (9). Suppose that y_1(x) and y_2(x) are linearly independent
solutions of eq. (8). Then the Wronskian is non-vanishing,

W = \det\begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix} = y_1 y_2' - y_1' y_2 \neq 0 .      (10)
Taking the derivative of the above equation,

\frac{dW}{dx} = \frac{d}{dx}\left(y_1 y_2' - y_1' y_2\right) = y_1 y_2'' - y_1'' y_2 ,

since the terms proportional to y_1' y_2' exactly cancel. Using the fact that y_1 and y_2 are
solutions to eq. (8), we have

y_1'' + a(x)\,y_1' + b(x)\,y_1 = 0 ,                                     (11)
y_2'' + a(x)\,y_2' + b(x)\,y_2 = 0 .                                     (12)

Next, we multiply eq. (12) by y_1 and multiply eq. (11) by y_2, and subtract the resulting
equations. The end result is:

y_1 y_2'' - y_1'' y_2 + a(x)\left[y_1 y_2' - y_1' y_2\right] = 0 ,

or equivalently [cf. eq. (6)],

\frac{dW}{dx} + a(x)\,W = 0 .                                            (13)
The solution to this first order differential equation is Abel's formula, given in eq. (9).
The Wronskian also appears in the following application. Suppose that one of the
two solutions of eq. (8), denoted by y_1(x), is known. We wish to determine a second
linearly independent solution of eq. (8), which we denote by y_2(x). The following
equation is an algebraic identity,

\frac{d}{dx}\left(\frac{y_2}{y_1}\right) = \frac{y_1 y_2' - y_2 y_1'}{y_1^2} = \frac{W}{y_1^2} ,
after using the definition of the Wronskian W given in eq. (10). Integrating with respect
to x yields

\frac{y_2}{y_1} = \int^{x} \frac{W(x)\,dx}{[y_1(x)]^2} .

Hence, it follows that

y_2(x) = y_1(x) \int^{x} \frac{W(x)\,dx}{[y_1(x)]^2} .                   (14)
Note that an indefinite integral always includes an arbitrary additive constant of
integration. Thus, we could have written:

y_2(x) = y_1(x)\left(\int^{x} \frac{W(x)\,dx}{[y_1(x)]^2} + C\right) ,

where C is an arbitrary constant. Of course, since y_1(x) is a solution to eq. (8),
if y_2(x) is a solution then so is y_2(x) + C\,y_1(x) for any number C. Thus, we are free to
choose any convenient value of C in defining the second linearly independent solution
of eq. (8).
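Eq. (14) can be tested on a concrete equation. A minimal SymPy sketch (SymPy assumed available), using y'' - 2y' + y = 0 with known solution y_1 = e^x, a hypothetical example; the constant c in eq. (9) is set to 1:

```python
import sympy as sp

x = sp.symbols('x')

# y'' - 2y' + y = 0, with known solution y1 = exp(x)  (hypothetical example).
y1 = sp.exp(x)
a = -2                                   # the coefficient a(x) in eq. (8)

# Abel's formula, eq. (9), with c = 1:  W(x) = exp(-int a(x) dx) = exp(2x)
W = sp.exp(-sp.integrate(a, x))

# Eq. (14):  y2 = y1 * int( W / y1^2 ) dx
y2 = sp.simplify(y1 * sp.integrate(W / y1**2, x))

# Verify that y2 solves the original equation.
residual = sp.simplify(sp.diff(y2, x, 2) - 2*sp.diff(y2, x) + y2)
print(y2, residual)  # x*exp(x) 0
```

The construction recovers the familiar second solution y_2 = x e^x of the repeated-root equation.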
Finally, we note that the Wronskian also appears in solutions to inhomogeneous
linear differential equations. For example, consider

y'' + a(x)\,y' + b(x)\,y = f(x) ,                                        (15)

and assume that the solutions to the homogeneous equation [eq. (8)], denoted by y_1(x)
and y_2(x), are known. Then the general solution to eq. (15) is given by

y(x) = c_1 y_1(x) + c_2 y_2(x) + y_p(x) ,

where y_p(x), called the particular solution, is determined by the following formula,

y_p(x) = -y_1(x) \int \frac{y_2(x)\,f(x)}{W(x)}\,dx + y_2(x) \int \frac{y_1(x)\,f(x)}{W(x)}\,dx .        (16)
This result is derived using the technique of variation of parameters. Namely, one
writes

y_p(x) = v_1(x)\,y_1(x) + v_2(x)\,y_2(x) ,                               (17)

subject to the condition (which is chosen entirely for convenience):

v_1' y_1 + v_2' y_2 = 0 .                                                (18)

With this choice, it follows that

y_p' = v_1 y_1' + v_2 y_2' .

Differentiating once more and plugging back into eq. (15), one obtains [after using
eq. (18)]:

v_1' y_1' + v_2' y_2' = f(x) .                                           (19)

A second derivation of eq. (14) is given in Appendix B. This latter derivation is useful as it can
be easily generalized to the case of an nth order linear differential equation.

We now have two equations, eqs. (18) and (19), which constitute two algebraic equations
for v_1' and v_2'. The solutions to these equations yield

v_1' = -\frac{y_2(x)\,f(x)}{W(x)} , \qquad v_2' = \frac{y_1(x)\,f(x)}{W(x)} ,

where W(x) is the Wronskian. We now integrate to get v_1 and v_2 and plug back into
eq. (17) to obtain eq. (16). The derivation is complete.
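The variation-of-parameters formula can be verified on a simple inhomogeneous equation. A minimal SymPy sketch (SymPy assumed available), using y'' + y = x, a hypothetical example with homogeneous solutions y_1 = cos x, y_2 = sin x, so that W = 1:

```python
import sympy as sp

x = sp.symbols('x')

# Inhomogeneous equation y'' + y = x  (hypothetical example).
y1, y2, f = sp.cos(x), sp.sin(x), x
W = sp.simplify(y1*sp.diff(y2, x) - sp.diff(y1, x)*y2)   # = cos^2 + sin^2 = 1

# Particular solution:  yp = -y1 int(y2 f / W) dx + y2 int(y1 f / W) dx
yp = sp.simplify(-y1*sp.integrate(y2*f/W, x) + y2*sp.integrate(y1*f/W, x))

# Check that yp solves y'' + y = f.
residual = sp.simplify(sp.diff(yp, x, 2) + yp - f)
print(yp, residual)  # x 0
```

The particular solution collapses to y_p = x, which indeed satisfies y'' + y = x by inspection.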
Reference:
Daniel Zwillinger, Handbook of Differential Equations, 3rd Edition (Academic Press,
San Diego, CA, 1998).

APPENDIX A: Derivative of the determinant of a matrix


Recall that for any matrix A, the determinant can be computed by the cofactor
expansion. The adjugate of A, denoted by adj A, is equal to the transpose of the matrix
of cofactors. In particular,

\det A = \sum_{j} a_{ij}\,(\mathrm{adj}\,A)_{ji} , \qquad \text{for any fixed } i ,      (20)

where the a_{ij} are elements of the matrix A and (\mathrm{adj}\,A)_{ji} = (-1)^{i+j} M_{ij}, where the minor
M_{ij} is the determinant of the matrix obtained by deleting the ith row and jth column
of A.
Suppose that the elements a_{ij} depend on a variable x. Then, by the chain rule,

\frac{d}{dx}\det A = \sum_{i,j} \frac{\partial \det A}{\partial a_{ij}}\,\frac{da_{ij}}{dx} .        (21)

Using eq. (20), and noting that (\mathrm{adj}\,A)_{ji} does not depend on a_{ij} (since the ith row and
jth column are removed before computing the minor determinant),

\frac{\partial \det A}{\partial a_{ij}} = (\mathrm{adj}\,A)_{ji} .

Hence, eq. (21) yields Jacobi's formula:

\frac{d}{dx}\det A = \sum_{i,j} (\mathrm{adj}\,A)_{ji}\,\frac{da_{ij}}{dx} = \mathrm{Tr}\left((\mathrm{adj}\,A)\,\frac{dA}{dx}\right) .      (22)

Recall that if A = [a_{ij}] and B = [b_{ij}], then the ij matrix element of AB is given by
\sum_k a_{ik} b_{kj}. The trace of AB is equal to the sum of its diagonal elements, or equivalently

\mathrm{Tr}(AB) = \sum_{j,k} a_{jk}\,b_{kj} .

If A is invertible, then we can use the formula

A^{-1} \det A = \mathrm{adj}\,A ,

to rewrite eq. (22) as

\frac{d}{dx}\det A = \det A \,\mathrm{Tr}\left(A^{-1}\frac{dA}{dx}\right) ,            (23)

which is the desired result.
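Eq. (23) is easy to spot-check on a small symbolic matrix; a minimal SymPy sketch (SymPy assumed available; the matrix below is a hypothetical example):

```python
import sympy as sp

x = sp.symbols('x')

# A small invertible matrix depending on x (hypothetical example).
A = sp.Matrix([[x, 1], [2, x**2]])       # det A = x**3 - 2

lhs = sp.diff(A.det(), x)                                    # d/dx det A
rhs = sp.simplify(A.det() * (A.inv() * A.diff(x)).trace())   # eq. (23)

print(lhs, sp.simplify(lhs - rhs))  # 3*x**2 0
```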


Reference:
M.A. Goldberg, The derivative of a determinant, The American Mathematical Monthly,
Vol. 79, No. 10 (Dec. 1972), pp. 1124–1126.

APPENDIX B: Another derivation of eq. (14)


Given a second order linear differential equation

y'' + a(x)\,y' + b(x)\,y = 0 ,                                           (24)

with a known solution y_1(x), one can derive a second linearly independent solution
y_2(x) by the method of variation of parameters. In this context, the idea of the
method is to define a new variable v,

y_2(x) = v(x)\,y_1(x) = y_1(x) \int w(x)\,dx ,                           (25)

where

v' \equiv w .                                                            (26)

Then, we have

y_2' = v y_1' + w y_1 , \qquad y_2'' = v y_1'' + w' y_1 + 2 w y_1' .

Since y_2 is a solution to eq. (24), it follows that

w' y_1 + w\left[2 y_1' + a(x)\,y_1\right] + v\left[y_1'' + a(x)\,y_1' + b(x)\,y_1\right] = 0 .

Using the fact that y_1 is a solution to eq. (24), the coefficient of v vanishes and we are
left with a first order differential equation for w,

w' y_1 + w\left[2 y_1' + a(x)\,y_1\right] = 0 .

Note that \mathrm{Tr}(cB) = c\,\mathrm{Tr}\,B for any number c and matrix B. In deriving eq. (23), c = \det A.
This method is easily extended to the case of an nth order linear differential equation. In particular,
if a non-trivial solution to eq. (3) is known, then this solution can be employed to reduce the order
of the differential equation by 1. This procedure is called reduction of order. For further details, see
pp. 352–354 of Daniel Zwillinger, Handbook of Differential Equations, 3rd Edition (Academic Press,
San Diego, CA, 1998).

After dividing this equation by y_1, we see that the solution to the resulting equation is

w(x) = c\,\exp\left(-\int \left[\frac{2 y_1'(x)}{y_1(x)} + a(x)\right] dx\right)
     = c\,e^{-2 \ln y_1(x)}\,\exp\left(-\int a(x)\,dx\right)
     = \frac{c}{[y_1(x)]^2}\,\exp\left(-\int a(x)\,dx\right) = \frac{W(x)}{[y_1(x)]^2} ,        (27)

after using eq. (9) for the Wronskian. The second solution to eq. (24), defined by
eq. (25), is then given by

y_2(x) = y_1(x) \int \frac{W(x)}{[y_1(x)]^2}\,dx ,

after employing eq. (27).
