LECTURE NOTES
Nezih Guner
Department of Economics
Pennsylvania State University
March 2006
Individuals have preferences over these goods and will trade with each other to maximize their well-being. We will assume that:
1. Consumer $i$'s preferences are representable by a utility function $u_i : X \to \mathbb{R}$, where $X \subseteq \mathbb{R}^m_+$ is the consumption set.
2. $u_i \in C^2$, i.e. $u_i$ is such that it is continuous and its first and second derivatives exist. Here $C^2$ represents the set of such functions.
3. $Du_i(x) \gg 0$ for all $x \in X$, i.e. preferences are strictly monotonic. [A weaker assumption would be monotonicity, i.e. $Du_i(x) > 0$ for all $x \in X$.]
4. $u_i$ is strictly concave, i.e. $u_i(\lambda x + (1-\lambda)y) > \lambda u_i(x) + (1-\lambda)u_i(y)$ for all $x, y \in X$, $x \neq y$, and $\lambda \in (0,1)$. [A weaker assumption would be strict quasi-concavity, i.e. $u_i(\lambda x + (1-\lambda)y) > u_i(y)$ for all $x, y \in X$ with $u_i(x) \geq u_i(y)$ and $\lambda \in (0,1)$.]
5. $w_i \in \mathbb{R}^m_{++}$, $i = 1, \ldots, n$, i.e. every agent is endowed with a positive amount of each good.
1 There are several books that cover the material presented here in much more detail. See, for example, Varian (1992), Farmer (1993), and Mas-Colell, Whinston and Green (1995). The general framework that is used here and in subsequent chapters was developed in Arrow (1951), Debreu (1951), Arrow and Debreu (1954), and McKenzie (1954).
For vectors $x, y \in \mathbb{R}^n$:
$x = y$ means $x_i = y_i$ for all $i$;
$x \geqq y$ means $x_i \geq y_i$ for all $i$;
$x \geq y$ means $x_i \geq y_i$ for all $i$ and $x \neq y$;
$x > y$ means $x_i > y_i$ for all $i$.
Hence,
$$\mathbb{R}^n_+ = \{x \in \mathbb{R}^n \mid x \geqq 0\} \quad \text{and} \quad \mathbb{R}^n_{++} = \{x \in \mathbb{R}^n \mid x > 0\}.$$
Here $Du_i(x)$ denotes the vector of partial derivatives,
$$Du_i(x) = \left( \frac{\partial u_i(x)}{\partial x^1_i}, \frac{\partial u_i(x)}{\partial x^2_i}, \ldots, \frac{\partial u_i(x)}{\partial x^m_i} \right),$$
and $\|x\| = \left( \sum_i x_i^2 \right)^{1/2}$ is the Euclidean norm.
subject to
$$p(x_i - w_i) \leq 0, \quad \text{and} \quad x_i \in X. \qquad (2)$$
Here $x_i$ is the consumption bundle that the consumer chooses given a price vector $p$ and the consumer's endowment vector $w_i$.
Hence, each individual tries to find the best possible consumption bundle $x_i = (x^1_i, x^2_i, \ldots, x^m_i)$ and is constrained by the value of his/her available resources. The budget constraint can be written more explicitly as
$$px_i = p^1 x^1_i + p^2 x^2_i + \cdots + p^m x^m_i.$$
Remark 2 Note that since $p$ and $x_i$ are vectors, we should write $p' x_i$ (with $p'$ representing the transpose of $p$) to represent an inner product. Here I adopt a simpler notation and do not differentiate between row and column vectors. It is obvious that we mean an inner product of two vectors when we write $px_i$.
Given our assumptions, there is a unique interior solution to this problem. This solution consists of $m$ equations for each consumer, determining the demands $x^1_i, x^2_i, \ldots, x^m_i$ as functions of prices. The consumer chooses $x_i$ given prices and initial endowments. All we care about, however, are the functions $f_{ij}$ representing the excess demand or excess supply of each good by each consumer. For each consumer, then, we will represent the optimal decisions as
$$x_i = w_i + f_i(p),$$
where we drop $w_i$ from $f_i$ as an argument, since it is given and its value is known to each consumer.
Note that the first order conditions are necessary and sufficient to characterize $x_i$ (since we are maximizing a strictly concave function on a convex set, a solution exists and it is unique). Hence, $x_i$ is the solution to consumer $i$'s problem if and only if it satisfies
$$Du_i(x_i) = \lambda_i p, \qquad (3)$$
and
$$px_i = pw_i. \qquad (4)$$
Here $\lambda_i$ is the Lagrange multiplier associated with the consumer's budget constraint. Note that $Du_i(x_i) = \lambda_i p$ is a set of $m$ equations for each $i$, since there is one derivative and one price for each good. Figure 2 represents the optimal choice of a consumer for a two-good case.
[Figure 2: the optimal choice of a consumer in the two-good case; the consumer has excess demand for good 2 ($x^2 > w^2$) and excess supply of good 1 ($x^1 < w^1$).]
4. It is homogeneous of degree 0:
$$f_i(\lambda p) = f_i(p), \quad \text{for all } \lambda > 0.$$
This implies that only relative prices matter. Hence, if we multiply all the prices by a constant, the optimal choice does not change.
5. If $p^n \to p$, where some $p^j = 0$, then $\|f_i(p^n)\| \to \infty$. This implies that if the price of a good is zero, its demand will be infinite.
The aggregate excess demand function is then given by
$$f(p) = \sum_{i=1}^{n} f_i(p) = \left( \sum_{i=1}^{n} f_{i1}(p), \sum_{i=1}^{n} f_{i2}(p), \ldots, \sum_{i=1}^{n} f_{im}(p) \right),$$
and it inherits the properties of the individual excess demand functions; in particular, $f(\lambda p) = f(p)$ for all $\lambda > 0$.
Since $pf(p) = 0$, if all but one market are in equilibrium (i.e. have excess demand of zero), the remaining market must be in equilibrium as well. This is called Walras's Law, and it allows us to focus on $m-1$ markets rather than $m$. Then, the fundamental question is whether we can find prices such that all markets are in equilibrium, i.e.
$$f(p^*) = 0.$$
Before going into the details of finding equilibrium prices, let's look at the consumer's problem in more detail. The following is the Lagrangian for the consumer's problem:
$$L = u_i(x_i) + \lambda_i (pw_i - px_i).$$
The first-order conditions,
$$\frac{\partial u_i(x_i)}{\partial x^j_i} = \lambda_i p^j, \quad j = 1, \ldots, m,$$
imply
$$\frac{\partial u_i(x_i)/\partial x^k_i}{\partial u_i(x_i)/\partial x^h_i} = \frac{p^k}{p^h}, \quad \text{for all } i, k, h.$$
We also know that since any two agents face the same prices, their marginal rates of substitution must be the same for any two goods, i.e.
$$\frac{\partial u_i(x_i)/\partial x^k_i}{\partial u_i(x_i)/\partial x^h_i} = \frac{\partial u_j(x_j)/\partial x^k_j}{\partial u_j(x_j)/\partial x^h_j}, \quad \text{for all } i, j, k, h.$$
The economy is thus summarized by the collection $\{(u_i, w_i)\}_{i=1}^{n}$. An allocation is feasible if
$$\sum_{i=1}^{n} x_i \leqq \sum_{i=1}^{n} w_i. \qquad (5)$$
[Figures 3 and 4: the Edgeworth box, with sides given by the total endowments of the two goods; the endowment point $w$ and the equilibrium point $x$, at which one agent's excess demand for good 2 equals the other agent's excess supply of good 2, and likewise for good 1.]
We will next look at a simple example of an economy with two agents and two goods. When we have a $2 \times 2$ economy, we can represent the total resources of this economy as a box (as in Figure 3). Such a box is called an Edgeworth box. Given an initial endowment point $w$, we know that the equilibrium consumptions and prices are then given by the tangency of two indifference curves (such as point $x$ in Figure 4).
Example 8 Consider the following version of a finite dimensional exchange economy with two goods and two agents. Utility functions take the form
$$u_i(x_i) = \begin{cases} (x^1_i)^{\alpha} (x^2_i)^{1-\alpha}, & \text{for } i = 1, \\ (x^1_i)^{\beta} (x^2_i)^{1-\beta}, & \text{for } i = 2, \end{cases}$$
where $\alpha, \beta \in (0,1)$. Endowments are given by $w_1 = (1,0)$ and $w_2 = (0,1)$.
To find the demand functions of agents 1 and 2 for goods 1 and 2 (as functions of prices and endowments), we first need to set up the optimization problem for an agent. For agent 1, the demands for goods 1 and 2 are chosen to solve the following problem:
$$\max_{x^1_1, x^2_1} (x^1_1)^{\alpha} (x^2_1)^{1-\alpha}$$
subject to
$$p^1 x^1_1 + p^2 x^2_1 = p^1.$$
Let $\lambda_1$ be the Lagrange multiplier associated with this budget constraint. Then it is easy to see that the FOCs are given by
$$\alpha (x^1_1)^{\alpha-1} (x^2_1)^{1-\alpha} = \lambda_1 p^1,$$
and
$$(1-\alpha)(x^1_1)^{\alpha} (x^2_1)^{-\alpha} = \lambda_1 p^2.$$
Taking the ratio of these two conditions,
$$x^2_1 = \frac{1-\alpha}{\alpha}\frac{p^1}{p^2} x^1_1,$$
and substituting into the budget constraint,
$$x^1_1 + \frac{1-\alpha}{\alpha} x^1_1 = 1.$$
Hence,
$$x^1_1 = \alpha, \quad \text{and} \quad x^2_1 = (1-\alpha)\frac{p^1}{p^2}.$$
Similarly, for agent 2,
$$x^1_2 = \beta\frac{p^2}{p^1}, \quad \text{and} \quad x^2_2 = 1-\beta.$$
The market clearing condition for good 1 is
$$x^1_1 + x^1_2 = 1,$$
hence
$$f^1(p) = x^1_1 + x^1_2 - 1 = \alpha + \beta\frac{p^2}{p^1} - 1.$$
As a normalization, let $p^2 = 1 - p^1$; then $f^1(p^1) = 0$ reads
$$\alpha + \beta\frac{1-p^1}{p^1} - 1 = 0,$$
so that
$$p^1 = \frac{\beta}{1-\alpha+\beta} \quad \text{and} \quad p^2 = 1 - \frac{\beta}{1-\alpha+\beta} = \frac{1-\alpha}{1-\alpha+\beta}.$$
At these prices,
$$x^2_1 = (1-\alpha)\frac{p^1}{p^2} = \beta \quad \text{and} \quad x^1_2 = \beta\frac{p^2}{p^1} = 1-\alpha,$$
so the equilibrium allocation is $x_1 = (\alpha, \beta)$ and $x_2 = (1-\alpha, 1-\beta)$.
For $\alpha = \beta = 0.5$, the excess demand function becomes
$$f^1(p^1) = 0.5 + 0.5\frac{1-p^1}{p^1} - 1 = \frac{0.5}{p^1} - 1,$$
which equals zero at $p^1 = 0.5$.
Figure 5 shows what $f^1(p^1)$ looks like. Since it crosses the horizontal axis, an equilibrium price exists. Furthermore, this price is unique.
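The equilibrium in this example is easy to verify numerically. The sketch below (function and variable names are my own) evaluates the excess demand $f^1(p^1) = \alpha + \beta(1-p^1)/p^1 - 1$ and locates its zero by bisection:

```python
# Excess demand for good 1 in the 2x2 Cobb-Douglas example:
# u_1 = (x1)^alpha (x2)^(1-alpha) with w1 = (1,0); agent 2 uses beta, w2 = (0,1).
def excess_demand(p1, alpha, beta):
    p2 = 1.0 - p1                      # normalization p1 + p2 = 1
    x1_1 = alpha                       # agent 1's demand for good 1
    x1_2 = beta * p2 / p1              # agent 2's demand for good 1
    return x1_1 + x1_2 - 1.0           # total demand minus total supply

def solve_price(alpha, beta, lo=1e-9, hi=1.0 - 1e-9, tol=1e-12):
    """Bisection: excess demand is positive near p1 = 0, negative near p1 = 1."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if excess_demand(mid, alpha, beta) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Closed form derived in the text: p1 = beta / (1 - alpha + beta).
p1 = solve_price(0.5, 0.5)
print(p1)  # ≈ 0.5 for alpha = beta = 0.5
```

For $\alpha = \beta = 0.5$ the computed price matches the closed form $p^1 = \beta/(1-\alpha+\beta) = 0.5$.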
If the alternative allocation $x'$ is feasible, then
$$\sum_{i=1}^{n} x'_i \leqq \sum_{i=1}^{n} w_i, \qquad (6)$$
good by good: $x'^j_1 + x'^j_2 + \cdots + x'^j_n \leq \sum_{i=1}^{n} w^j_i$ for each $j$. For the consumer who strictly prefers $x'$, it must be outside his/her budget set at prices $p$; otherwise he/she would have chosen it in the first place. Then,
$$px'_j > pw_j,$$
or
$$p^1 x'^1_j + p^2 x'^2_j + \cdots + p^m x'^m_j > p^1 w^1_j + p^2 w^2_j + \cdots + p^m w^m_j.$$
Since every other consumer must spend at least the value of his/her endowment, summing over all consumers gives
$$\sum_{i=1}^{n} px'_i > \sum_{i=1}^{n} pw_i. \qquad (7)$$
Pareto optimal allocations can be computed as solutions to a social planner's problem:
$$\max \sum_{i=1}^{n} \theta_i u_i(x_i), \quad \text{with } \theta_i \geq 0 \text{ and } \sum_{i=1}^{n} \theta_i = 1,$$
subject to
$$\sum_{i=1}^{n} x_i \leqq \sum_{i=1}^{n} w_i \quad \text{and} \quad x_i \geqq 0. \qquad (9)$$
The first order conditions for this problem are
$$\theta_i Du_i(x_i) = \mu, \qquad (10)$$
$$\mu \left( \sum_{i=1}^{n} w_i - \sum_{i=1}^{n} x_i \right) = 0, \qquad (11)$$
and, since preferences are strictly monotonic, the resource constraint binds:
$$\sum_{i=1}^{n} w_i = \sum_{i=1}^{n} x_i, \qquad (12)$$
where $\mu$ is the vector of Lagrange multipliers on the resource constraints, one for each good. If we let $p = \mu$ (so that prices are equal to the Lagrange multiplier for the constraint for each good), and $\lambda_i = 1/\theta_i$, then (8) and (10) are identical as well. Then, whether a Pareto optimal allocation can be decentralized comes down to whether the social planner can make sure that at prices $\mu$, the planner's allocation is affordable for each consumer. This might require a redistribution of resources defined by the following transfer functions:
$$\tau_i(\theta) = \mu (x_i(\theta) - w_i).$$
Theorem 12 (Second Welfare Theorem) Every Pareto optimal allocation can be decentralized as a competitive equilibrium, i.e. given a Pareto optimal allocation $x$, we can find a price vector $p$ and transfers $\tau_i$ such that, given the initial endowments and these transfers, $x$ is a competitive allocation with prices $p$.
Before we analyze some particular examples, note that the first order conditions for the planner's problem imply
$$\theta_i \frac{\partial u_i(x_i)}{\partial x^j_i} = \mu^j \quad \text{for all } i, j.$$
Then we have
$$\frac{\partial u_i(x_i)/\partial x^k_i}{\partial u_i(x_i)/\partial x^h_i} = \frac{\partial u_j(x_j)/\partial x^k_j}{\partial u_j(x_j)/\partial x^h_j},$$
i.e. any Pareto optimal allocation equates marginal rates of substitution across agents.
[Figures 6 and 7: the set of Pareto optimal allocations in the Edgeworth box, and the transfers that move the endowment from $w$ to $w^*$ so that a given Pareto optimal allocation can be supported as a competitive equilibrium.]
Let $\lambda$ be the Lagrange multiplier associated with this budget constraint. Then it is easy to see that the FOCs are given by
$$\alpha (x^1_1)^{\alpha-1} (x^2_1)^{1-\alpha} = \lambda p^1,$$
and
$$(1-\alpha)(x^1_1)^{\alpha} (x^2_1)^{-\alpha} = \lambda p^2.$$
These imply
$$x^2_1 = \frac{1-\alpha}{\alpha}\frac{p^1}{p^2} x^1_1,$$
which then can be used to find $x^1_1$ and $x^2_1$ by substituting it in the budget constraint:
$$x^1_1 = \alpha\frac{p^1 a_1 + p^2 b_1}{p^1}, \quad x^2_1 = (1-\alpha)\frac{p^1 a_1 + p^2 b_1}{p^2}.$$
Similarly, for the second consumer we will get
$$x^1_2 = \alpha\frac{p^1 a_2 + p^2 b_2}{p^1}, \quad x^2_2 = (1-\alpha)\frac{p^1 a_2 + p^2 b_2}{p^2}.$$
The market clearing condition for good 2 is then
$$(1-\alpha)\frac{p^1 a_1 + p^2 b_1}{p^2} + (1-\alpha)\frac{p^1 a_2 + p^2 b_2}{p^2} = b_1 + b_2.$$
p2
p1 ; we will
x21
; and M RS2 =
x11
1
x22
:
x12
Since
x12 = a1 + a2
x21 ;
x21
=
x11
1
b1 + b2 x21
:
a1 + a2 x11
(13)
Note that equation (13) provides a complete characterization of all Pareto optimal allocations: any division of the total resources between the two agents that satisfies it is a Pareto optimal allocation.
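Condition (13) can be checked numerically: any split of the total resources satisfying it equates the two agents' marginal rates of substitution. A minimal sketch (the parameter values are arbitrary assumptions):

```python
# Both agents have u = (x1)^alpha (x2)^(1-alpha); endowments (a1,b1), (a2,b2).
alpha = 0.4
a1, b1, a2, b2 = 2.0, 1.0, 1.0, 3.0   # totals: 3 units of good 1, 4 of good 2

def mrs(x1, x2):
    # marginal rate of substitution for Cobb-Douglas utility
    return (alpha / (1 - alpha)) * (x2 / x1)

total1, total2 = a1 + a2, b1 + b2
x1_1 = 1.2                             # pick agent 1's holding of good 1 freely
# Solving (13), x2_1/x1_1 = (total2 - x2_1)/(total1 - x1_1), gives:
x2_1 = total2 * x1_1 / total1

mrs1 = mrs(x1_1, x2_1)
mrs2 = mrs(total1 - x1_1, total2 - x2_1)
print(abs(mrs1 - mrs2) < 1e-9)         # True: the MRSs coincide on (13)
```

With identical Cobb-Douglas preferences the condition collapses to proportional sharing of the two goods, i.e. the contract curve is the diagonal of the box.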
Example 14 Consider a $2\times 2$ exchange economy. There are 2 goods and 2 individuals, and preferences and initial endowments are as follows:
$$u_1(x^1_1, x^2_1) = a \log(x^1_1) + (1-a)\log(x^2_1), \quad u_2(x^1_2, x^2_2) = b \log(x^1_2) + (1-b)\log(x^2_2),$$
and
$$w_1 = (w^1_1, w^2_1) = (1,1), \quad w_2 = (w^1_2, w^2_2) = (2,2).$$
Then, individual 1's problem is
$$\max_{x^1_1, x^2_1} u_1(x^1_1, x^2_1)$$
subject to
$$p^1 x^1_1 + p^2 x^2_1 \leq p^1 w^1_1 + p^2 w^2_1, \quad x^1_1 \geq 0, \; x^2_1 \geq 0.$$
The FOCs with respect to $x^1_1$ and $x^2_1$ are
$$\frac{a}{x^1_1} = \lambda_1 p^1, \quad \text{and} \quad \frac{1-a}{x^2_1} = \lambda_1 p^2.$$
Hence, we get
$$x^2_1 = \frac{p^1}{p^2}\frac{1-a}{a} x^1_1,$$
and, substituting into the budget constraint (note $p^1 w^1_1 + p^2 w^2_1 = p^1 + p^2$),
$$x^1_1 = \frac{a(p^1 + p^2)}{p^1}, \quad \text{and} \quad x^2_1 = \frac{(1-a)(p^1 + p^2)}{p^2}.$$
Similarly, for individual 2, the FOCs with respect to $x^1_2$ and $x^2_2$ are
$$\frac{b}{x^1_2} = \lambda_2 p^1, \quad \text{and} \quad \frac{1-b}{x^2_2} = \lambda_2 p^2,$$
which imply
$$x^2_2 = \frac{p^1}{p^2}\frac{1-b}{b} x^1_2.$$
Using individual 2's budget constraint (note $p^1 w^1_2 + p^2 w^2_2 = 2p^1 + 2p^2$),
$$x^1_2 = \frac{b(p^1 w^1_2 + p^2 w^2_2)}{p^1} = \frac{b(2p^1 + 2p^2)}{p^1}, \quad x^2_2 = \frac{(1-b)(p^1 w^1_2 + p^2 w^2_2)}{p^2} = \frac{(1-b)(2p^1 + 2p^2)}{p^2}.$$
The excess demand for good 1 is then
$$f^1(p) = x^1_1 + x^1_2 - 3 = \frac{(a + 2b)(p^1 + p^2)}{p^1} - 3.$$
If we let $p^1 + p^2 = 1$, equate the excess demand to zero, and solve, we get
$$p^1 = \frac{a + 2b}{3}, \quad \text{and} \quad p^2 = 1 - \frac{a + 2b}{3} = \frac{3 - (a + 2b)}{3}.$$
Then, plugging these into the demand functions of each individual gives us the competitive allocations:
$$x^1_1 = \frac{3a}{a + 2b} \quad \text{and} \quad x^2_1 = \frac{3(1-a)}{3 - (a + 2b)},$$
$$x^1_2 = \frac{6b}{a + 2b} \quad \text{and} \quad x^2_2 = \frac{6(1-b)}{3 - (a + 2b)}.$$
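These closed forms are straightforward to verify numerically. A small sketch (parameter values are arbitrary) computes the equilibrium of Example 14 and checks market clearing:

```python
a, b = 0.3, 0.6                       # preference parameters of individuals 1 and 2
w1, w2 = (1.0, 1.0), (2.0, 2.0)       # endowments

# Equilibrium prices with the normalization p1 + p2 = 1.
p1 = (a + 2 * b) / 3
p2 = 1 - p1

# Log (Cobb-Douglas) demands: quantity = expenditure share * income / price.
inc1 = p1 * w1[0] + p2 * w1[1]
inc2 = p1 * w2[0] + p2 * w2[1]
x11, x21 = a * inc1 / p1, (1 - a) * inc1 / p2
x12, x22 = b * inc2 / p1, (1 - b) * inc2 / p2

assert abs(x11 + x12 - 3) < 1e-12     # good 1 market clears
assert abs(x21 + x22 - 3) < 1e-12     # good 2 market clears
assert abs(x11 - 3 * a / (a + 2 * b)) < 1e-12   # closed form from the text
print(x11, x21)
```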
Now consider the planner's problem for this economy:
$$\max \; \theta \left[ a\log(x^1_1) + (1-a)\log(x^2_1) \right] + (1-\theta)\left[ b\log(x^1_2) + (1-b)\log(x^2_2) \right],$$
subject to:
$$x^1_1 + x^1_2 = 3, \quad x^2_1 + x^2_2 = 3,$$
and
$$x^1_1 \geq 0, \; x^2_1 \geq 0, \; x^1_2 \geq 0, \; x^2_2 \geq 0.$$
The Lagrangian is
$$L = \theta\left[ a\log(x^1_1) + (1-a)\log(x^2_1) \right] + (1-\theta)\left[ b\log(x^1_2) + (1-b)\log(x^2_2) \right] - \mu_1\left[ x^1_1 + x^1_2 - 3 \right] - \mu_2\left[ x^2_1 + x^2_2 - 3 \right].$$
The FOCs are
$$x^1_1: \; \frac{\theta a}{x^1_1} = \mu_1, \qquad x^2_1: \; \frac{\theta(1-a)}{x^2_1} = \mu_2,$$
$$x^1_2: \; \frac{(1-\theta)b}{x^1_2} = \mu_1, \qquad x^2_2: \; \frac{(1-\theta)(1-b)}{x^2_2} = \mu_2.$$
Therefore we have
$$\frac{\theta a}{x^1_1} = \frac{(1-\theta)b}{3 - x^1_1}, \quad \text{and} \quad \frac{\theta(1-a)}{x^2_1} = \frac{(1-\theta)(1-b)}{3 - x^2_1}.$$
Hence,
$$x^1_1 = \frac{3\theta a}{(1-\theta)b + \theta a}, \quad x^1_2 = \frac{3(1-\theta)b}{(1-\theta)b + \theta a},$$
$$x^2_1 = \frac{3\theta(1-a)}{\theta(1-a) + (1-\theta)(1-b)}, \quad x^2_2 = \frac{3(1-\theta)(1-b)}{\theta(1-a) + (1-\theta)(1-b)}.$$
By substituting these into the FOCs of the problem, we can find the values of the Lagrange multipliers in the planner's problem as
$$\mu_1 = \frac{(1-\theta)b + \theta a}{3}, \quad \text{and} \quad \mu_2 = \frac{\theta(1-a) + (1-\theta)(1-b)}{3}.$$
For $\theta = 1/2$, for example, the Pareto optimal allocation is
$$x^1_1 = \frac{3a}{b + a} \quad \text{and} \quad x^2_1 = \frac{3(1-a)}{(1-a) + (1-b)}, \qquad (14)$$
$$x^1_2 = \frac{3b}{b + a} \quad \text{and} \quad x^2_2 = \frac{3(1-b)}{(1-a) + (1-b)}. \qquad (15)$$
The transfer functions are
$$tr_1(\theta) = \mu_1\left[ \frac{3\theta a}{(1-\theta)b + \theta a} - 1 \right] + \mu_2\left[ \frac{3\theta(1-a)}{\theta(1-a) + (1-\theta)(1-b)} - 1 \right],$$
and
$$tr_2(\theta) = \mu_1\left[ \frac{3(1-\theta)b}{(1-\theta)b + \theta a} - 2 \right] + \mu_2\left[ \frac{3(1-\theta)(1-b)}{\theta(1-a) + (1-\theta)(1-b)} - 2 \right].$$
The first is the transfer to individual 1 and the second is the transfer to individual 2. Notice that these terms must sum to zero. One can plug in $\theta = 1/2$ and find the transfers for the above case.
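That the transfers are a pure redistribution can be confirmed numerically for any planner weight. A sketch (helper names and parameter values are mine):

```python
a, b, theta = 0.3, 0.6, 0.4   # preference parameters and an arbitrary planner weight

# Planner allocations and multipliers derived in the text.
d1 = (1 - theta) * b + theta * a                  # denominator for good 1
d2 = theta * (1 - a) + (1 - theta) * (1 - b)      # denominator for good 2
x11, x12 = 3 * theta * a / d1, 3 * (1 - theta) * b / d1
x21, x22 = 3 * theta * (1 - a) / d2, 3 * (1 - theta) * (1 - b) / d2
mu1, mu2 = d1 / 3, d2 / 3

# Transfers: value of the planner allocation minus value of the endowment.
tr1 = mu1 * (x11 - 1) + mu2 * (x21 - 1)   # individual 1's endowment is (1, 1)
tr2 = mu1 * (x12 - 2) + mu2 * (x22 - 2)   # individual 2's endowment is (2, 2)

assert abs(x11 + x12 - 3) < 1e-12 and abs(x21 + x22 - 3) < 1e-12
print(abs(tr1 + tr2) < 1e-12)   # True: transfers are a pure redistribution
```

The sum is zero because $tr_1 + tr_2 = \mu_1(x^1_1 + x^1_2 - 3) + \mu_2(x^2_1 + x^2_2 - 3)$, and the planner allocation is feasible by construction.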
Hence, we can now answer the following question. Suppose prices are given by (16), i.e. the planner's multipliers at $\theta = 1/2$, $p^1 = \frac{a+b}{6}$ and $p^2 = \frac{(1-a)+(1-b)}{6}$, and individual 1's endowments are
$$1 + \left[ \frac{3\cdot\frac12 a}{\frac12 b + \frac12 a} - 1 \right] = \frac{3a}{b+a} \text{ of good 1, and } 1 + \left[ \frac{3\cdot\frac12(1-a)}{\frac12(1-a) + \frac12(1-b)} - 1 \right] = \frac{3(1-a)}{(1-a)+(1-b)} \text{ of good 2.}$$
What does individual 1 demand? Using the demand function,
$$x^1_1 = \frac{a(p^1 w^1_1 + p^2 w^2_1)}{p^1} = \frac{a\left( \frac{a+b}{6}\frac{3a}{a+b} + \frac{(1-a)+(1-b)}{6}\frac{3(1-a)}{(1-a)+(1-b)} \right)}{\frac{a+b}{6}} = \frac{a\left( \frac{a}{2} + \frac{1-a}{2} \right) 6}{a+b} = \frac{3a}{a+b},$$
i.e. exactly the Pareto optimal allocation in (14): at these prices and endowments, individual 1 demands precisely the planner's allocation.
References
[1] Arrow, K. J., "The Role of Securities in the Optimal Allocation of Risk-Bearing," Review of Economic Studies, 31, 91-96, 1964 (translation of the original 1953 article from French).
[2] Arrow, K. J. and G. Debreu, "Existence of an Equilibrium for a Competitive Economy," Econometrica, 22, 265-290, 1954.
[3] Debreu, G., Theory of Value, Wiley, 1959.
[4] Farmer, Roger E. A., "General Equilibrium under Certainty" (Chapter 4) in The Macroeconomics of Self-Fulfilling Prophecies, The MIT Press, Cambridge, MA, 1993.
[5] Mas-Colell, Andreu, Michael D. Whinston, and Jerry R. Green, Microeconomic Theory, Oxford University Press, 1995.
[6] McKenzie, Lionel W., "On Equilibrium in Graham's Model of World Trade and Other Competitive Systems," Econometrica, 22, 147-161, 1954.
[7] Varian, Hal R., "Exchange" (Chapter 17) in Microeconomic Analysis, 3rd Edition, W.W. Norton and Company, New York, 1992.
In our static Walrasian economy agents live for a single period. In this section we will analyze model economies where they live forever. We will assume that time is discrete and the horizon is infinite, $t = 0, 1, 2, \ldots$ As in the previous section, there is a finite number of agents, indexed by $i = 1, \ldots, n$. We will assume, however, that there is one consumption good per period, i.e. $m = 1$ (see Kehoe (1989) for an analysis with $m > 1$). This consumption good is not storable. Agents have deterministic endowment streams. The endowment stream of agent $i$ is denoted by $w^i = \{w^i_t\}_{t=0}^{\infty}$.
Let $c^i_t$ be the consumption of agent $i$ at time $t$, and let $c^i = \{c^i_t\}_{t=0}^{\infty}$ be a consumption sequence. Agents' preferences are given by
$$U(c^i) = \sum_{t=0}^{\infty} \beta_i^t u_i(c^i_t),$$
where $\beta_i \in (0,1)$ is agent $i$'s discount factor.
After consumers choose their consumption sequences at this time-0 market, time does not play an explicit role. As time passes, the transactions (exchanges of goods) that were agreed upon at time zero take place. We assume that all contracts that are agreed upon at time 0 are honored. We call this market arrangement Arrow-Debreu markets. We will normalize prices and set $p_0 = 1$.
Definition 15 An Arrow-Debreu equilibrium is a sequence of allocations $c^i = \{c^i_t\}_{t=0}^{\infty}$ for each $i$, and a sequence of prices $p = \{p_t\}_{t=0}^{\infty}$, such that
1. Given $p$, $c^i$ solves agent $i$'s maximization problem for each $i$:
$$\max_{c^i} \sum_{t=0}^{\infty} \beta_i^t u_i(c^i_t), \qquad (17)$$
subject to
$$\sum_{t=0}^{\infty} p_t c^i_t \leq \sum_{t=0}^{\infty} p_t w^i_t. \qquad (18)$$
2. Markets clear:
$$\sum_{i=1}^{n} c^i_t = \sum_{i=1}^{n} w^i_t \text{ for all } t. \qquad (19)$$
Remark 16 It is immediate that for the consumer's maximization problem to have a solution, it must be the case that $\sum_{t=0}^{\infty} p_t w^i_t$ is finite. Otherwise, there is no well-defined solution for the maximization problem.
Remark 17 If $\sum_{t=0}^{\infty} p_t w^i_t$ is finite for each $i$, then the value of the aggregate endowment is also finite in an Arrow-Debreu equilibrium, since
$$\sum_{t=0}^{\infty} p_t \left( \sum_{i=1}^{n} w^i_t \right) = \sum_{i=1}^{n} \left( \sum_{t=0}^{\infty} p_t w^i_t \right).$$
Suppose, to the contrary, that there is a feasible allocation $\tilde{c}$ that Pareto dominates the equilibrium allocation $c$, so that for some agent $j$,
$$\sum_{t=0}^{\infty} \beta_j^t u_j(\tilde{c}^j_t) > \sum_{t=0}^{\infty} \beta_j^t u_j(c^j_t).$$
Since $c^j$ was chosen at prices $p$, the preferred plan $\tilde{c}^j$ must cost more than agent $j$'s endowment, $\sum_{t=0}^{\infty} p_t \tilde{c}^j_t > \sum_{t=0}^{\infty} p_t w^j_t$, while for every other agent $\sum_{t=0}^{\infty} p_t \tilde{c}^i_t \geq \sum_{t=0}^{\infty} p_t w^i_t$. Summing over agents,
$$\sum_{t=0}^{\infty} p_t \sum_{i=1}^{n} \tilde{c}^i_t > \sum_{t=0}^{\infty} p_t \sum_{i=1}^{n} w^i_t,$$
which contradicts the feasibility of $\tilde{c}$.
Remark 19 Note that this proof follows exactly the same steps as the proof of the First Welfare Theorem in the last section.
The FOC for the consumer's problem is
$$\beta_i^t \frac{\partial u_i(c^i_t)}{\partial c^i_t} = \lambda_i p_t. \qquad (20)$$
Equilibrium allocations must also satisfy the budget constraint of each individual,
$$\sum_{t=0}^{\infty} p_t c^i_t = \sum_{t=0}^{\infty} p_t w^i_t.$$
There is one budget constraint per consumer. Finally, the allocations have to be feasible,
$$\sum_{i=1}^{n} c^i_t \leq \sum_{i=1}^{n} w^i_t.$$
This feasibility constraint has to hold every period, since the good is perishable.
What does equation (20) tell us about consumer behavior? Note that for any consumer $i$, and any two time periods $t$ and $t+1$, we have
$$\frac{\beta_i^t \, \partial u_i(c^i_t)/\partial c^i_t}{\beta_i^{t+1} \, \partial u_i(c^i_{t+1})/\partial c^i_{t+1}} = \frac{p_t}{p_{t+1}}.$$
Hence,
$$\frac{\partial u_i(c^i_t)}{\partial c^i_t} = \beta_i \frac{p_t}{p_{t+1}} \frac{\partial u_i(c^i_{t+1})}{\partial c^i_{t+1}}. \qquad (21)$$
This equation, which will appear many times in this course, is the intertemporal optimization condition for the consumer.
If the consumer allocates his/her resources optimally, the cost of reducing time-$t$ consumption, $\partial u_i(c^i_t)/\partial c^i_t$, must be equal to the benefit of increasing time-$t+1$ consumption, $\partial u_i(c^i_{t+1})/\partial c^i_{t+1}$, adjusted by the discount factor $\beta_i$ and the relative value of goods between the two periods, $p_t/p_{t+1}$. Discounting makes future consumption less valuable. If $p_t/p_{t+1}$ is high, the benefit of moving resources from $t$ to $t+1$ is large, and if $p_t/p_{t+1}$ is low, then the benefit of moving resources from $t$ to $t+1$ is small.
Given the FOCs for individuals $i$ and $j$, we also know that their consumptions at any time period are related by
$$\frac{\beta_i^t \, \partial u_i(c^i_t)/\partial c^i_t}{\lambda_i} = \frac{\beta_j^t \, \partial u_j(c^j_t)/\partial c^j_t}{\lambda_j} = p_t.$$
Pareto optimal allocations again solve a planner's problem,
$$\max \sum_{i=1}^{n} \theta_i \sum_{t=0}^{\infty} \beta_i^t u_i(c^i_t)$$
subject to
$$\sum_{i=1}^{n} c^i_t = \sum_{i=1}^{n} w^i_t, \quad t = 0, 1, 2, \ldots$$
The FOCs are
$$\theta_i \beta_i^t \frac{\partial u_i(c^i_t)}{\partial c^i_t} = \mu_t,$$
where $\mu_t$ is the Lagrange multiplier on the time-$t$ resource constraint, and $c^i_t(\theta)$ is a Pareto optimal allocation of goods.
We also know that if we can find a $\theta^*$ such that $\tau_i(\theta^*) = 0$ for all $i$, then the $\mu_t(\theta^*)$ are the Arrow-Debreu prices and the allocations $c^i_t(\theta^*)$ are the Arrow-Debreu allocations for this economy. The following example illustrates how this approach can be an easier way to find Arrow-Debreu allocations than solving for the Arrow-Debreu equilibrium directly. This method of computing competitive equilibria was formulated by Negishi (1960).
Example 20 Let $n = 2$ and
$$u_1(c_t) = u_2(c_t) = \log(c_t),$$
and
$$w^1_t = w^2_t = 1 \text{ for all } t.$$
Also let the consumers differ in their discount factors, with $\beta_1 < \beta_2$. An Arrow-Debreu equilibrium for this economy is characterized by the following three sets of equations:
$$\beta_i^t \frac{1}{c^i_t} = \lambda_i p_t, \text{ for } i = 1, 2 \text{ and } t = 0, 1, 2, \ldots,$$
$$\sum_{t=0}^{\infty} p_t c^i_t = \sum_{t=0}^{\infty} p_t \text{ for } i = 1, 2,$$
and
$$c^1_t + c^2_t = 2 \text{ for all } t.$$
Consider instead the planner's problem
$$\max \; \theta_1 \sum_{t=0}^{\infty} \beta_1^t \log(c^1_t) + \theta_2 \sum_{t=0}^{\infty} \beta_2^t \log(c^2_t),$$
subject to
$$c^1_t + c^2_t = 2 \text{ for all } t.$$
Then the Pareto optimal allocations must satisfy the following conditions:
$$\theta_i \beta_i^t \frac{1}{c^i_t} = \mu_t \text{ for } i = 1, 2,$$
and
$$c^1_t + c^2_t = 2 \text{ for all } t.$$
Using the FOCs for $i = 1, 2$, we have
$$\theta_1 \beta_1^t \frac{1}{c^1_t} = \theta_2 \beta_2^t \frac{1}{c^2_t}.$$
Hence,
$$c^2_t = \frac{\theta_2 \beta_2^t}{\theta_1 \beta_1^t} c^1_t, \quad \text{and} \quad c^1_t \left( 1 + \frac{\theta_2 \beta_2^t}{\theta_1 \beta_1^t} \right) = 2.$$
Therefore,
$$c^i_t = \frac{2\theta_i \beta_i^t}{\theta_1 \beta_1^t + \theta_2 \beta_2^t} \text{ for } i = 1, 2, \qquad (22)$$
and the Lagrange multipliers for the planner's resource constraint are given by
$$\mu_t = \frac{\theta_1 \beta_1^t + \theta_2 \beta_2^t}{2}. \qquad (23)$$
Note that to find $\mu_t$, we simply used the FOC, $\theta_i \beta_i^t \frac{1}{c^i_t} = \mu_t$. Then we can decentralize the allocations in (22) using the prices in (23) and the following transfers:
$$\tau_1(\theta_1, \theta_2) = \sum_{t=0}^{\infty} \mu_t (c^1_t - 1) = \frac{\theta_1}{2(1-\beta_1)} - \frac{\theta_2}{2(1-\beta_2)},$$
and
$$\tau_2(\theta_1, \theta_2) = \sum_{t=0}^{\infty} \mu_t (c^2_t - 1) = \frac{\theta_2}{2(1-\beta_2)} - \frac{\theta_1}{2(1-\beta_1)}.$$
Note that $\tau_1(\theta_1, \theta_2)$ is calculated as:
$$\tau_1(\theta_1, \theta_2) = \sum_{t=0}^{\infty} \mu_t (c^1_t - 1) = \sum_{t=0}^{\infty} \frac{\theta_1 \beta_1^t + \theta_2 \beta_2^t}{2}\left( \frac{2\theta_1 \beta_1^t}{\theta_1 \beta_1^t + \theta_2 \beta_2^t} - 1 \right) = \sum_{t=0}^{\infty} \frac{\theta_1 \beta_1^t - \theta_2 \beta_2^t}{2} = \frac{\theta_1}{2(1-\beta_1)} - \frac{\theta_2}{2(1-\beta_2)}.$$
The $\theta_1$ and $\theta_2$ that make $\tau_1(\theta_1, \theta_2)$ and $\tau_2(\theta_1, \theta_2)$ equal to 0 satisfy $\theta_1 + \theta_2 = 1$ and
$$\frac{\theta_1}{1-\beta_1} = \frac{\theta_2}{1-\beta_2},$$
i.e. $\theta_1 = \frac{1-\beta_1}{2-\beta_1-\beta_2}$ and $\theta_2 = \frac{1-\beta_2}{2-\beta_1-\beta_2}$. The corresponding allocations are
$$c^1_t = \frac{2(1-\beta_1)\beta_1^t}{(1-\beta_1)\beta_1^t + (1-\beta_2)\beta_2^t} \quad \text{and} \quad c^2_t = \frac{2(1-\beta_2)\beta_2^t}{(1-\beta_1)\beta_1^t + (1-\beta_2)\beta_2^t}.$$
To prove our claim, let's go back to the competitive allocations. The FOC for consumer $i$ was
$$\beta_i^t \frac{1}{c^i_t} = \lambda_i p_t.$$
Let's focus on $i = 1$; using the FOC for $t$ and $t+1$, we get
$$\frac{\beta_1^t / c^1_t}{\beta_1^{t+1} / c^1_{t+1}} = \frac{p_t}{p_{t+1}}.$$
Hence,
$$c^1_{t+1} = \beta_1 \frac{p_t}{p_{t+1}} c^1_t, \quad \text{and, iterating,} \quad c^1_t = \beta_1^t \frac{p_0}{p_t} c^1_0.$$
Substituting into the budget constraint,
$$c^1_0 p_0 \left( 1 + \beta_1 + \beta_1^2 + \cdots \right) = \sum_{t=0}^{\infty} p_t.$$
Then,
$$c^1_0 = \frac{(1-\beta_1) \sum_{t=0}^{\infty} p_t}{p_0},$$
and
$$c^1_t = \beta_1^t \frac{p_0}{p_t} c^1_0 = \frac{\beta_1^t (1-\beta_1) \sum_{t=0}^{\infty} p_t}{p_t}.$$
Since a similar rule also determines the consumption behavior of consumer 2, we have the following market clearing condition for $t = 0$:
$$\frac{(1-\beta_1)\sum_{t=0}^{\infty} p_t}{p_0} + \frac{(1-\beta_2)\sum_{t=0}^{\infty} p_t}{p_0} = 2.$$
Therefore,
$$p_0 = \frac{(1-\beta_1)\sum_{t=0}^{\infty} p_t + (1-\beta_2)\sum_{t=0}^{\infty} p_t}{2}.$$
Indeed, using the market clearing condition for any period, we can get
$$p_t = \frac{\beta_1^t (1-\beta_1) + \beta_2^t (1-\beta_2)}{2} \sum_{t=0}^{\infty} p_t.$$
Then,
$$c^1_0 = \frac{(1-\beta_1)\sum_{t=0}^{\infty} p_t}{p_0} = \frac{2(1-\beta_1)}{(1-\beta_1) + (1-\beta_2)},$$
and
$$c^1_t = \frac{\beta_1^t (1-\beta_1)\sum_{t=0}^{\infty} p_t}{p_t} = \frac{2(1-\beta_1)\beta_1^t}{(1-\beta_1)\beta_1^t + (1-\beta_2)\beta_2^t}.$$
It is then trivial to check that these are the allocations we found by solving the planner's problem and setting the transfers to zero. In this example, it is much easier to calculate Pareto optimal allocations than competitive (Arrow-Debreu) allocations.
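The Negishi calculations of Example 20 can be checked by truncating the infinite sums at a large horizon. A sketch (the discount factors and the horizon are arbitrary choices):

```python
beta1, beta2 = 0.90, 0.95
T = 2000                      # truncation horizon for the infinite sums

# Negishi weights that set the transfers to zero: theta_i proportional to 1 - beta_i.
theta1 = (1 - beta1) / (2 - beta1 - beta2)
theta2 = (1 - beta2) / (2 - beta1 - beta2)

def c(t):
    """Planner allocations c1_t, c2_t and multiplier mu_t at date t."""
    g1, g2 = theta1 * beta1**t, theta2 * beta2**t
    return 2 * g1 / (g1 + g2), 2 * g2 / (g1 + g2), (g1 + g2) / 2

# Market clearing and (approximately) zero transfers; each endowment is 1 per period.
tr1 = sum(c(t)[2] * (c(t)[0] - 1) for t in range(T))
assert all(abs(sum(c(t)[:2]) - 2) < 1e-12 for t in range(50))
print(abs(tr1) < 1e-10)                   # True: transfers vanish at these weights

# The more patient consumer (beta2 > beta1) consumes more and more over time.
print(c(0)[1] < c(100)[1] < c(1000)[1])   # True
```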
Example 21 Consider a simple exchange economy with two consumers, indexed by $i = 1, 2$, who live forever, and with one perishable consumption good. Time is discrete and indexed by $t = 0, 1, \ldots$. Each consumer values sequences of consumption goods, $c^i = \{c^i_t\}_{t=0}^{\infty}$, according to
$$U(c^i) = \sum_{t=0}^{\infty} \beta^t \ln(c^i_t),$$
t=0
with
wti =
1
t=0
subject to
1
X
pt cit =
t=0
ln(cit );
1
X
pt wti ;
t=0
1
X
ln(c1t ) + (1
t=0
subject to
c1t + c2t = 2 if t is even,
and
c1t + c2t = 1 if t is odd.
31
) ln(c2t ) ;
The planner's FOCs imply
$$\frac{\theta}{c^1_t} = \frac{1-\theta}{c^2_t}$$
(note that, given the resources in any period, this condition equates the MRSs of agents 1 and 2). Hence,
$$c^2_t = \frac{1-\theta}{\theta} c^1_t.$$
You can now use the resource constraint to arrive at
$$c^1_t = 2\theta \text{ and } c^2_t = 2(1-\theta) \text{ if } t \text{ is even,}$$
and
$$c^1_t = \theta \text{ and } c^2_t = 1-\theta \text{ if } t \text{ is odd.}$$
The planner's multipliers then pin down the supporting prices: $\mu_t = \beta^t \theta / c^1_t$, which is proportional to $\beta^t/2$ if $t$ is even and to $\beta^t$ if $t$ is odd. Using these prices, one can compute the transfer functions
$$\tau_i(\theta) = \sum_{t=0}^{\infty} p_t \left( c^i_t(\theta) - w^i_t \right),$$
and find the $\theta$ that sets them to zero.
2.1 Sequential Equilibrium
Our analysis above was built on Arrow-Debreu markets, where all trade takes place at a time-0 market. There were no markets in any other period. Now we look at another possible market arrangement, where we have a market each period. Suppose now that trades take place in spot markets that open every period. Hence, at time $t$, agents only trade time-$t$ goods in a spot market. Let $q_t$ be the price of the good in this time-$t$ market. If agents can only trade the time-$t$ good at time $t$, and there are no credit arrangements, then this economy would look like a sequence of static exchange economies.
With spot markets we need a credit mechanism that will allow agents to move their resources between periods. Therefore, we will assume that there is a one-period credit market that works as follows: each period agents can borrow or lend in this one-period credit market. Let $\widetilde{R}_t = 1 + \tilde{r}_t$ be the gross interest rate on time-$t$ borrowing (lending). How does this credit system work? One can think of a central credit agency that keeps track of people who borrow and lend. If you want to lend, you bring your goods to the agency, which gives them to other people. Next period you can go and receive your goods back (plus the interest) or bring your goods to pay your debt. We assume everybody honors his/her contract and there is perfect record keeping.
Then, each individual will have a sequence of budget constraints (rather than a single one, as was the case with Arrow-Debreu markets):
$$q_0 c^i_0 + l^i_0 = q_0 w^i_0,$$
$$q_1 c^i_1 + l^i_1 = q_1 w^i_1 + (1+\tilde{r}_0) l^i_0,$$
$$\vdots$$
$$q_t c^i_t + l^i_t = q_t w^i_t + (1+\tilde{r}_{t-1}) l^i_{t-1}, \qquad (24)$$
$$\vdots$$
Each agent maximizes $\sum_{t=0}^{\infty} \beta_i^t u_i(c^i_t)$ subject to this sequence of constraints. Substituting the constraints recursively,
$$q_t c^i_t + l^i_t = q_t w^i_t + (1+\tilde{r}_{t-1}) l^i_{t-1} = q_t w^i_t + (1+\tilde{r}_{t-1})\left[ q_{t-1} w^i_{t-1} - q_{t-1} c^i_{t-1} \right] + (1+\tilde{r}_{t-1})(1+\tilde{r}_{t-2}) l^i_{t-2} = \cdots,$$
and dividing both sides by $(1+\tilde{r}_{t-1})\cdots(1+\tilde{r}_0)$, we get
$$\frac{l^i_t}{(1+\tilde{r}_{t-1})\cdots(1+\tilde{r}_0)} = \frac{q_t w^i_t}{(1+\tilde{r}_{t-1})\cdots(1+\tilde{r}_0)} + \frac{q_{t-1} w^i_{t-1}}{(1+\tilde{r}_{t-2})\cdots(1+\tilde{r}_0)} + \cdots + q_0 w^i_0 - \frac{q_t c^i_t}{(1+\tilde{r}_{t-1})\cdots(1+\tilde{r}_0)} - \frac{q_{t-1} c^i_{t-1}}{(1+\tilde{r}_{t-2})\cdots(1+\tilde{r}_0)} - \cdots - q_0 c^i_0. \qquad (25)$$
Note that the right hand side of this equation is nothing but the time-0 present value of the agent's resources minus the present value of his/her consumption. Since in this economy credit arrangements simply move resources between periods and do not add any resources to the agent's budget constraint, we need to impose the following condition:
$$\lim_{t\to\infty} \frac{l^i_t}{(1+\tilde{r}_{t-1})\cdots(1+\tilde{r}_0)} = 0. \qquad (26)$$
Define
$$p_t = \frac{q_t}{(1+\tilde{r}_0)(1+\tilde{r}_1)\cdots(1+\tilde{r}_{t-1})}. \qquad (27)$$
Then, taking the limit of (25),
$$\lim_{t\to\infty} \frac{l^i_t}{(1+\tilde{r}_{t-1})\cdots(1+\tilde{r}_0)} = \sum_{t=0}^{\infty} p_t w^i_t - \sum_{t=0}^{\infty} p_t c^i_t.$$
Thus, condition (26) simply makes sure that as $t \to \infty$, agents are on their budget constraint:
$$\sum_{t=0}^{\infty} p_t w^i_t = \sum_{t=0}^{\infty} p_t c^i_t,$$
and if the agent's resources are finite, then there is a well-defined solution to the agent's maximization problem. How can we make sure that (26) is satisfied? This can be achieved by placing an upper bound, call it $B^i$, on how much each agent can borrow. Note that any level of the borrowing limit, even a very large one, will still preclude the possibility of a Ponzi game.
In this environment, given a sequence of budget constraints, the agent's problem is characterized by the following set of FOCs:
$$\beta_i^t \frac{\partial u_i(c^i_t)}{\partial c^i_t} = \eta_t q_t$$
for $c^i_t$, and
$$-\eta_t + (1+\tilde{r}_t)\eta_{t+1} = 0$$
for $l^i_t$, where $\eta_t$ is the Lagrange multiplier for the time-$t$ budget constraint.
We can combine these two conditions to arrive at
$$\frac{\beta_i^t \, \partial u_i(c^i_t)/\partial c^i_t}{q_t} = (1+\tilde{r}_t) \frac{\beta_i^{t+1} \, \partial u_i(c^i_{t+1})/\partial c^i_{t+1}}{q_{t+1}},$$
or
$$\frac{\partial u_i(c^i_t)}{\partial c^i_t} = \beta_i (1+\tilde{r}_t) \frac{q_t}{q_{t+1}} \frac{\partial u_i(c^i_{t+1})}{\partial c^i_{t+1}}. \qquad (28)$$
This is exactly the intertemporal optimization condition we had for the Arrow-Debreu economy, equation (21), since using equation (27) we have
$$\frac{\partial u_i(c^i_t)}{\partial c^i_t} = \beta_i (1+\tilde{r}_t) \frac{q_t}{q_{t+1}} \frac{\partial u_i(c^i_{t+1})}{\partial c^i_{t+1}} = \beta_i \frac{p_t}{p_{t+1}} \frac{\partial u_i(c^i_{t+1})}{\partial c^i_{t+1}}.$$
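The link between Arrow-Debreu prices and spot prices used here, $p_t/p_{t+1} = (q_t/q_{t+1})(1+\tilde{r}_t)$, follows mechanically from (27); a small sketch with arbitrary example sequences:

```python
# Arbitrary spot prices and one-period interest rates.
q = [1.0, 1.1, 0.9, 1.3, 1.2]
r = [0.05, 0.02, 0.08, 0.03]      # r[t] is the rate on time-t lending

# Arrow-Debreu prices from (27): p_t = q_t / prod_{s<t} (1 + r_s).
p, disc = [], 1.0
for t, qt in enumerate(q):
    p.append(qt / disc)
    if t < len(r):
        disc *= 1 + r[t]

# Relative Arrow-Debreu prices equal spot-price ratios times the gross rate.
for t in range(len(q) - 1):
    lhs = p[t] / p[t + 1]
    rhs = (q[t] / q[t + 1]) * (1 + r[t])
    assert abs(lhs - rhs) < 1e-12
print(p)
```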
In this economy, the credit balances are recorded in terms of the value of time-$t$ goods. The time-$t$ budget constraint implies
$$l^i_t = q_t w^i_t - q_t c^i_t,$$
indicating the time-$t$ value of your credit balance. Next period, the budget constraint is
$$q_{t+1} c^i_{t+1} + l^i_{t+1} = q_{t+1} w^i_{t+1} + (1+\tilde{r}_t) l^i_t,$$
and you pay to (or receive from) the credit agency the amount $(1+\tilde{r}_t) l^i_t$. Hence, you borrow $\frac{l^i_t}{q_t}$ units of time-$t$ goods and pay back $\frac{(1+\tilde{r}_t) l^i_t}{q_{t+1}}$ units. Then, the interest rate in terms of physical quantities is
$$1 + r_t = \frac{(1+\tilde{r}_t) l^i_t / q_{t+1}}{l^i_t / q_t} = \frac{(1+\tilde{r}_t) q_t}{q_{t+1}}. \qquad (29)$$
Rather than keeping track of the credits in terms of the value of goods, they could also be recorded in the physical amount of goods: if you borrow (or lend) $l^i_t$ units of goods at time $t$, then at time $t+1$ you pay (or receive) $l^i_t (1+r_t)$ units of the good.
The spot prices are then not necessary to define an equilibrium in this environment: the interest rates, defined in this way, already contain the information about the relative value of the good across periods. In this set-up, the budget constraints of the individual are
$$c^i_0 + l^i_0 = w^i_0,$$
$$c^i_1 + l^i_1 = w^i_1 + (1+r_0) l^i_0,$$
$$\vdots$$
$$c^i_t + l^i_t = w^i_t + (1+r_{t-1}) l^i_{t-1}, \qquad (30)$$
$$\vdots$$
Note also that, combining (27) and (29),
$$1 + r_t = \frac{p_t}{p_{t+1}}. \qquad (31)$$
This relation will appear many times as we move forward in this course.
We are now ready to define a sequential equilibrium. A sequential equilibrium is a sequence of allocations $c^i$ and loans $l^i$ for each $i$, and a sequence of interest rates $r$, such that:
1. Given $r$, $c^i$ solves
$$\max_{c^i} \sum_{t=0}^{\infty} \beta_i^t u_i(c^i_t), \qquad (33)$$
subject to
$$c^i_t + l^i_t = w^i_t + (1+r_{t-1}) l^i_{t-1}, \qquad (34)$$
and the borrowing limit $l^i_t \geq -B^i$.
2. Markets clear:
$$\sum_{i=1}^{n} c^i_t = \sum_{i=1}^{n} w^i_t, \qquad (35)$$
and
$$\sum_{i=1}^{n} l^i_t = 0. \qquad (36)$$
It should be rather straightforward to see that the consumption allocations in an Arrow-Debreu equilibrium and in a sequential equilibrium are identical.
References
[1] Kehoe, T., "Intertemporal General Equilibrium Models," in The Economics of Missing Markets, Information, and Games, Frank Hahn (ed.), 1989.
[2] Negishi, T., "Welfare Economics and Existence of an Equilibrium for a Competitive Economy," Metroeconomica, 12, 92-97, 1960.
Stochastic Endowments
Everything was certain in the economies we have analyzed so far. But we all know that uncertainty is an important element in many economic activities. We will now extend our previous analysis into a stochastic environment.
We assume time is discrete and the time horizon is infinite, $t = 0, 1, 2, \ldots$ There is again a finite number of agents, indexed by $i = 1, \ldots, n$, and one consumption good per period. This consumption good is not storable. In contrast to the previous set-up, endowments depend on the state of the economy. The state of the economy is uncertain: there are, for example, good days and bad days, sunny days and rainy days. Yet this uncertainty has a well-defined structure. We will let $s_t$ denote the state of the economy at time $t$. It is a stochastic variable, and we will assume that $s_t$ can take values from a given finite set $S$; for example, if $S = \{good, bad\}$, then each period either $s_t = good$ or $s_t = bad$. We assume that $s_t$ follows a Markov process, i.e. the probability that $s_{t+1} = s'$ only depends on the current state $s_t$. We will let
$$\pi(s'|s) = \operatorname{prob}(s_{t+1} = s' \mid s_t = s)$$
represent the transition probability from $s_t = s$ to $s_{t+1} = s'$. Since this is a transition probability, it must satisfy
$$\sum_{s' \in S} \pi(s'|s) = 1.$$
We assume that $s_t$ is publicly observed. Hence, given any value of $s_0$, we can use the transition probabilities to determine the probability of any particular sequence of states occurring. We will call a possible realization of states up to time $t$ a history, denoted by $s^t$:
$$s^t = [s_t, s_{t-1}, \ldots, s_0].$$
Given $s_0$, the probability of a particular history $s^t$ is
$$\pi(s^t|s_0) = \pi(s_t|s_{t-1}) \pi(s_{t-1}|s_{t-2}) \cdots \pi(s_1|s_0).$$
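Chaining transition probabilities in this way is simple to implement. A minimal sketch (the transition matrix values are arbitrary):

```python
# Transition probabilities pi[s][s'] on the state space S = {0, 1}.
pi = {0: {0: 0.8, 1: 0.2},
      1: {0: 0.3, 1: 0.7}}

# Each row must sum to one.
assert all(abs(sum(row.values()) - 1.0) < 1e-12 for row in pi.values())

def history_prob(history, s0):
    """Probability of the state sequence [s_1, ..., s_t] given the initial state s0."""
    prob, prev = 1.0, s0
    for s in history:
        prob *= pi[prev][s]
        prev = s
    return prob

# Probabilities of all length-2 histories from s0 = 0 sum to one.
total = sum(history_prob([s1, s2], 0) for s1 in (0, 1) for s2 in (0, 1))
print(abs(total - 1.0) < 1e-12)   # True
```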
Figure 8 illustrates the possible 3-period histories when $s_t$ can take two values from the set $S = \{h, l\}$ and $s_0 = l$.
Below we will assume that at $t = 0$, $s_0$ is known. Otherwise we would have to assume a probability distribution over $s_0$. If $s_0$ is known, then the transition probabilities characterize the stochastic structure of this economy. We will analyze Markov processes in more detail below.
We assume that agents' endowments depend on $s_t$. In particular, we assume that
$$w^i_t = w^i(s_t),$$
i.e. agent $i$'s endowment is a time invariant function of $s_t$. For example, if $S = \{l, h\}$, then
$$w^i_t = \begin{cases} 2 & \text{if } s_t = h, \\ 0 & \text{if } s_t = l. \end{cases}$$
[Figure 8: the tree of histories from $s_0 = l$. At $t = 1$, $s_1 \in \{l, h\}$; at $t = 2$, the four possible histories are $s^2 = [l,l,l]$, $[h,l,l]$, $[l,h,l]$, and $[h,h,l]$.]
t=0
where agent decide his/her consumption for every date and for every possible
realization of the history.
Agents try to maximize the expected value of their utility, defined as
$$U(c^i) = \sum_{t=0}^{\infty} \left[ \sum_{s^t} \beta_i^t u(c^i_t(s^t)) \pi(s^t|s_0) \right], \qquad (37)$$
where the term $\sum_{s^t} \beta_i^t u(c^i_t(s^t)) \pi(s^t|s_0)$ represents the expected utility of the time-$t$ consumption plan (given $s_0$). $U(c^i)$ is called an expected utility function, or sometimes a von Neumann-Morgenstern utility function. We use $\sum_{s^t}$ to denote summation over all possible histories that can occur up to time $t$. We assume that $u$ is strictly increasing, strictly concave, and satisfies $\lim_{c\to 0} u'(c) = \infty$.
Let $p_t(s^t)$ be the price of time-$t$, history-$s^t$ goods at time 0. Then the budget constraint of the individual is
$$\sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) c^i_t(s^t) \leq \sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) w^i_t(s_t). \qquad (38)$$
An Arrow-Debreu equilibrium is defined as before:
1. Given prices, $c^i$ solves
$$\max \sum_{t=0}^{\infty} \sum_{s^t} \beta_i^t u(c^i_t(s^t)) \pi(s^t|s_0), \qquad (39)$$
subject to
$$\sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) c^i_t(s^t) \leq \sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) w^i_t(s_t). \qquad (40)$$
2. Markets clear:
$$\sum_{i=1}^{n} c^i_t(s^t) \leq \sum_{i=1}^{n} w^i_t(s_t) \text{ for all } t \text{ and } s^t. \qquad (41)$$
Note that when we write $\sum_{i=1}^{n} c^i_t(s^t) \leq \sum_{i=1}^{n} w^i_t(s_t)$, the consumptions depend on $s^t$ while the endowments depend on $s_t$. This is fine, since any history $s^t$ implies a final state $s_t$. In what follows we will normalize prices by $p_0(s_0) = 1$.
The FOCs for the consumer are given by
$$\beta_i^t u'(c^i_t(s^t)) \pi(s^t|s_0) = \lambda_i p_t(s^t), \qquad (42)$$
which we can combine for dates $t$ and $t+1$ to arrive at
$$u'(c^i_t(s^t)) = \beta_i \frac{p_t(s^t)}{p_{t+1}(s^{t+1})} \frac{\pi(s^{t+1}|s_0)}{\pi(s^t|s_0)} u'(c^i_{t+1}(s^{t+1})). \qquad (43)$$
For any two histories $s^{t+1}$ and $s^t$ such that $s^{t+1} = [s_{t+1}, s^t]$, we have
$$\pi(s^{t+1}|s_0) = \pi(s_{t+1}|s_t) \pi(s^t|s_0).$$
Hence,
$$u'(c^i_t(s^t)) = \beta_i \frac{p_t(s^t)}{p_{t+1}(s^{t+1})} \pi(s_{t+1}|s_t) u'(c^i_{t+1}(s^{t+1})).$$
Taking the ratio of the FOCs for agents $i$ and $1$ for the same history eliminates prices and probabilities: $u'(c^i_t(s^t)) = \frac{\lambda_i}{\lambda_1} u'(c^1_t(s^t))$. If, for example, $u(c) = \frac{c^{1-\sigma}}{1-\sigma}$ with $\sigma > 0$, we get
$$c^i_t(s^t) = \left( \frac{\lambda_i}{\lambda_1} \right)^{-1/\sigma} c^1_t(s^t).$$
More generally,
$$c^i_t(s^t) = (u')^{-1}\left( \frac{\lambda_i}{\lambda_1} u'(c^1_t(s^t)) \right),$$
and summing over agents and using feasibility,
$$\sum_{i=1}^{I} (u')^{-1}\left( \frac{\lambda_i}{\lambda_1} u'(c^1_t(s^t)) \right) = \sum_{i=1}^{I} w^i_t(s_t).$$
Since the right hand side of this equation only depends on $s_t$, the left hand side must also only depend on $s_t$; hence we have
$$c^i_t(s^t) = c^i_t(s_t) \text{ for all } i,$$
i.e. individual consumptions depend only on the current state.
Example 25 Suppose $n = 2$, and
$$u_i(c^i_t) = \log(c^i_t) \text{ for } i = 1, 2.$$
Let the state take two values, $S = \{1, 2\}$, with transition probabilities $\pi_{11}, \pi_{12}, \pi_{21},$ and $\pi_{22}$, and let the endowments be
$$w^1_t = s_t \quad \text{and} \quad w^2_t = 3 - s_t.$$
Hence, each period there is a fixed amount of the good (i.e. there is no aggregate uncertainty):
$$\bar{w} = w^1_t + w^2_t = 3 \text{ for all } t.$$
The FOC for $c^i_t(s^t)$ is
$$\beta^t \frac{1}{c^i_t(s^t)} \pi(s^t|s_0) = \lambda_i p_t(s^t).$$
Since $p_0(s_0) = 1$,
$$\frac{1}{c^i_0(s_0)} = \lambda_i,$$
and for $t = 1$,
$$\beta \frac{1}{c^i_1(s^1)} \pi(s^1|s_0) = \lambda_i p_1(s^1), \quad \text{i.e.} \quad \beta \frac{c^i_0(s_0)}{c^i_1(s^1)} \pi(s^1|s_0) = p_1(s^1).$$
Here $s^1$ is a history for $t = 1$; there are two possible histories, $[1,1]$ and $[2,1]$. Since $\beta$, $\pi(s^1|s_0)$, and $p_1(s^1)$ are the same for both agents, we have
$$\frac{c^1_0(s_0)}{c^1_1(s^1)} = \frac{c^2_0(s_0)}{c^2_1(s^1)}.$$
Since $c^1_0(s_0) + c^2_0(s_0) = 3$ and $c^1_1(s^1) + c^2_1(s^1) = 3$, it must be the case that
$$c^1_0(s_0) = c^1_1(s^1) = c^1 \quad \text{and} \quad c^2_0(s_0) = c^2_1(s^1) = c^2,$$
where $c^1 + c^2 = 3$. Indeed, one can easily show that this will be true for any time period. Hence, agents consume a fixed amount of goods every period.
Then,
$$\beta \frac{c^i_0(s_0)}{c^i_1(s^1)} \pi(s^1|s_0) = p_1(s^1)$$
implies
$$p_1(s^1) = \beta \pi(s^1|s_0),$$
and, more generally,
$$p_t(s^t) = \beta^t \pi(s^t|s_0).$$
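With a concrete transition matrix, the consumption levels $c^1$ and $c^2$ can be computed by truncating these infinite sums; the needed expectations $E[w^1_t \mid s_0]$ are obtained by iterating the Markov chain forward. A sketch (the transition probabilities and $\beta$ are arbitrary assumptions):

```python
beta = 0.95
# Two states s in {1, 2}; w1_t = s_t, w2_t = 3 - s_t, and s0 = 1.
P = {1: {1: 0.6, 2: 0.4}, 2: {1: 0.5, 2: 0.5}}   # arbitrary transition probs

def expected_w1(t, s0=1):
    """E[s_t | s0] by forward iteration of the state distribution."""
    dist = {s0: 1.0}
    for _ in range(t):
        new = {1: 0.0, 2: 0.0}
        for s, pr in dist.items():
            for s2, q in P[s].items():
                new[s2] += pr * q
        dist = new
    return sum(s * pr for s, pr in dist.items())

T = 1000   # truncation of the infinite sums
c1 = (1 - beta) * sum(beta**t * expected_w1(t) for t in range(T))
c2 = (1 - beta) * sum(beta**t * (3 - expected_w1(t)) for t in range(T))
print(abs(c1 + c2 - 3.0) < 1e-6)   # True: the constant consumptions split the total of 3
```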
Using agent 1's budget constraint, with $s_0 = 1$ (so that $w^1_0 = 1$ and $w^2_0 = 2$), the left hand side is
$$c^1 + \sum_{s^1} p_1(s^1) c^1 + \sum_{s^2} p_2(s^2) c^1 + \cdots = c^1 \left[ 1 + \sum_{s^1} \beta \pi(s^1|s_0) + \sum_{s^2} \beta^2 \pi(s^2|s_0) + \cdots \right] = \frac{c^1}{1-\beta},$$
since the probabilities sum to one at each date. Therefore,
$$c^1 = (1-\beta)\left[ 1 + \sum_{s^1} p_1(s^1) w^1_1(s_1) + \sum_{s^2} p_2(s^2) w^1_2(s_2) + \cdots \right],$$
and
$$c^2 = (1-\beta)\left[ 2 + \sum_{s^1} p_1(s^1) w^2_1(s_1) + \sum_{s^2} p_2(s^2) w^2_2(s_2) + \cdots \right].$$
The coefficients of absolute and relative risk aversion are defined as
$$CARA = -\frac{u''(c)}{u'(c)}, \quad \text{and} \quad CRRA = -\frac{u''(c)\,c}{u'(c)}.$$
Note that for a linear utility function both CARA and CRRA are zero. Agents with linear utility are called risk neutral. Agents with concave utility functions are risk averse, and the curvature of the utility function determines the degree of risk aversion.
For $u(c) = \frac{c^{1-\sigma}}{1-\sigma}$ we have
$$CRRA = -\frac{(-\sigma) c^{-\sigma-1} c}{c^{-\sigma}} = \sigma,$$
and for this utility function $\sigma$ determines both the elasticity of intertemporal substitution and the coefficient of relative risk aversion.
3.1 Asset Pricing
Consider an asset that pays dividends according to a function $d(s_t)$, i.e. one that delivers $d(s_t)$ units of the consumption good at every date and in every state. Using the Arrow-Debreu prices, its time-0 price is
$$P_{0,0} = \sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) d(s_t),$$
where $P_{0,0}$ indicates that the asset starts payments at time 0, and its price is measured in terms of the value of the goods at time 0.
Example 27 Let $d(s_t) = 1$ for all $s_t$. The asset pays 1 unit at every date and in every possible state. The cost of this asset is
$$P_{0,0} = \sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t).$$
Suppose now we take an asset $\{d(s_t)\}_{t=0}^{\infty}$ and get rid of the first $\tau$ payments, i.e. $\bar{d}(s_t) = d(s_t)$ if $t > \tau$, and $0$ otherwise. The asset pays according to the function $d(s_t)$ starting at date $\tau + 1$. How much is this asset valued at time 0? Obviously,
$$P_{\tau,0}(s^{\tau}) = \sum_{t=\tau+1}^{\infty} \; \sum_{\{\tilde{s}^t : \, \tilde{s}^{\tau} = s^{\tau}\}} p_t(\tilde{s}^t) d(\tilde{s}_t),$$
where $\sum_{\{\tilde{s}^t : \, \tilde{s}^{\tau} = s^{\tau}\}}$ indicates that we are only summing over future histories that have the right $s^{\tau}$.
What is the value of this asset in terms of time-$\tau$ goods? It is simply
$$\frac{P_{\tau,0}(s^{\tau})}{p_{\tau}(s^{\tau})},$$
where $p_{\tau}(s^{\tau})$ is the price of time-$\tau$, history-$s^{\tau}$ goods at time 0. Then,
$$P_{\tau,\tau}(s^{\tau}) = \sum_{t=\tau+1}^{\infty} \; \sum_{\{\tilde{s}^t : \, \tilde{s}^{\tau} = s^{\tau}\}} \frac{p_t(\tilde{s}^t)}{p_{\tau}(s^{\tau})} d(\tilde{s}_t).$$
Using the consumer's FOCs, relative prices satisfy
$$\frac{p_t(s^t)}{p_{\tau}(s^{\tau})} = \beta^{t-\tau} \frac{u'(c^i_t(s^t))}{u'(c^i_{\tau}(s^{\tau}))} \pi(s^t|s^{\tau}),$$
so that
$$P_{\tau,\tau}(s^{\tau}) = \sum_{t=\tau+1}^{\infty} \; \sum_{\{\tilde{s}^t : \, \tilde{s}^{\tau} = s^{\tau}\}} \beta^{t-\tau} \frac{u'(c^i_t(\tilde{s}^t))}{u'(c^i_{\tau}(s^{\tau}))} \pi(\tilde{s}^t|s^{\tau}) d(\tilde{s}_t).$$
Let's look at an asset that pays $d(s_{t+1})$ next period and nothing else:
$$P_t(s^t) = E_t\left[ \beta \frac{u'(c^i_{t+1}(s^{t+1}))}{u'(c^i_t(s^t))} d(s_{t+1}) \right].$$
44
Rearranging,
$$u'(c^i_t(s^t)) = E_t\left[ \beta \frac{d(s_{t+1})}{P_t(s^t)} u'(c^i_{t+1}(s^{t+1})) \right].$$
For an asset that pays $d(s_{t+1})$ next period and nothing else, $\frac{d(s_{t+1})}{P_t(s^t)} = R_{t+1}$ represents its return. Then,
$$u'(c^i_t(s^t)) = \beta \left\{ E_t\left[ u'(c^i_{t+1}(s^{t+1})) \right] E_t\left[ R_{t+1} \right] + \operatorname{Cov}_t\left[ u'(c^i_{t+1}(s^{t+1})), R_{t+1} \right] \right\},$$
or
$$E_t\left[ R_{t+1} \right] = \frac{u'(c^i_t(s^t))/\beta - \operatorname{Cov}_t\left[ u'(c^i_{t+1}(s^{t+1})), R_{t+1} \right]}{E_t\left[ u'(c^i_{t+1}(s^{t+1})) \right]}.$$
What does the covariance term indicate about asset prices? Suppose $c_{t+1}^i(s^{t+1})$ and $d(s_{t+1})$ move together, i.e. the asset pays you a high amount when your consumption is high. Then, $\mathrm{Cov}_t\!\left(u'(c_{t+1}^i(s^{t+1})),\,R_{t+1}\right)$ is negative and $E_t[R_{t+1}]$ must be higher. Hence, if an asset's payoff is highly correlated with your consumption, then its return must be high. You need a high return to hold this asset because it is risky. If, on the other hand, $c_{t+1}^i(s^{t+1})$ and $d(s_{t+1})$ move in opposite directions, the asset is less risky and the return can be lower. The risky asset has to pay a premium; we call this the risk premium.
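The covariance logic can be illustrated with a small numerical sketch. All values below (discount factor, CRRA coefficient, consumption states, payoffs) are hypothetical; the point is only that an asset whose payoff is positively correlated with consumption must carry the higher expected return.

```python
import numpy as np

# Two-state sketch of the risk-premium logic; all parameter values are hypothetical.
beta, sigma = 0.96, 2.0
pi = np.array([0.5, 0.5])               # probabilities of next period's states
c_now = 1.0
c_next = np.array([0.9, 1.1])           # consumption in each state tomorrow

u_prime = lambda c: c ** (-sigma)       # marginal utility under CRRA
m = beta * u_prime(c_next) / u_prime(c_now)   # stochastic discount factor by state

def expected_return(d):
    """Price the one-period payoff d via P = E[m d], then return E[R] = E[d] / P."""
    P = np.sum(pi * m * d)
    return np.sum(pi * d / P)

# Payoff high when consumption is high (risky) vs. high when consumption is low (hedge)
R_risky = expected_return(np.array([0.5, 1.5]))
R_hedge = expected_return(np.array([1.5, 0.5]))
print(R_risky > R_hedge)                # True: the risky asset pays a premium
```

Both assets have the same expected payoff, so the gap between the two expected returns comes entirely from the covariance term above.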
Finally note that, given
\[
P_t(s^t) = E_t\left[\sum_{k=t+1}^{\infty} \beta^{k-t}\,\frac{u'(c_k^i(s^k))}{u'(c_t^i(s^t))}\,d(s_k)\right]
= E_t\left[\beta\,\frac{u'(c_{t+1}^i(s^{t+1}))}{u'(c_t^i(s^t))}\,d(s_{t+1}) + \beta^2\,\frac{u'(c_{t+2}^i(s^{t+2}))}{u'(c_t^i(s^t))}\,d(s_{t+2}) + \cdots\right],
\]
and
\[
P_{t+1}(s^{t+1}) = E_{t+1}\left[\sum_{k=t+2}^{\infty} \beta^{k-t-1}\,\frac{u'(c_k^i(s^k))}{u'(c_{t+1}^i(s^{t+1}))}\,d(s_k)\right]
= E_{t+1}\left[\beta\,\frac{u'(c_{t+2}^i(s^{t+2}))}{u'(c_{t+1}^i(s^{t+1}))}\,d(s_{t+2}) + \beta^2\,\frac{u'(c_{t+3}^i(s^{t+3}))}{u'(c_{t+1}^i(s^{t+1}))}\,d(s_{t+3}) + \cdots\right],
\]
the price can be written recursively as
\[
P_t(s^t) = E_t\left[\beta\,\frac{u'(c_{t+1}^i(s^{t+1}))}{u'(c_t^i(s^t))}\,\big[d(s_{t+1}) + P_{t+1}(s^{t+1})\big]\right].
\]
Then,
\[
u'(c_t^i(s^t)) = E_t\left[\beta\,u'(c_{t+1}^i(s^{t+1}))\,R_{t+1}\right], \tag{45}
\]
where
\[
R_{t+1} = \frac{d(s_{t+1}) + P_{t+1}(s^{t+1})}{P_t(s^t)}
\]
is the return on this asset. This should not be surprising, since we arrived at these asset pricing functions from the agents' optimization problem.
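The recursive pricing equation can be solved directly in a finite-state setting. The sketch below assumes an endowment (Lucas-tree) economy in which equilibrium consumption equals the dividend, so the recursion becomes a linear system in state-contingent prices; the Markov chain and preference parameters are illustrative assumptions.

```python
import numpy as np

# Solve P(s) = sum_{s'} Pi[s,s'] * beta * u'(d(s'))/u'(d(s)) * (d(s') + P(s'))
# for a two-state Lucas tree where consumption equals the dividend.
# The transition matrix, dividends, and preference parameters are illustrative.
beta, sigma = 0.95, 2.0
Pi = np.array([[0.8, 0.2],
               [0.3, 0.7]])             # Markov transition probabilities
d = np.array([1.0, 1.2])                # dividend (= consumption) in each state

# m[s, s'] = beta * u'(d(s')) / u'(d(s)) with u'(c) = c**(-sigma)
m = beta * np.outer(d ** sigma, d ** (-sigma))
A = Pi * m                              # probability-weighted discount factors
# The recursion P = A @ (d + P) is linear: (I - A) P = A @ d
P = np.linalg.solve(np.eye(2) - A, A @ d)
print(P)                                # state-contingent tree prices
```

Solving the linear system is equivalent to summing the infinite discounted-dividend series, which converges here because the spectral radius of `A` is below one.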
Remark 29 The material in this section follows Ljungqvist and Sargent (2004).

Remark 30 Note that the asset pricing equations we analyzed above provide a relation between prices and quantities, but do not go far enough to derive an asset pricing function that maps the fundamentals of the economy into prices in equilibrium. This is done in Lucas (1978).

Remark 31 Mehra and Prescott (1985) apply the Lucas (1978) framework to U.S. data and investigate whether the risk premium implied by the model is consistent with the data.
References

[1] Ljungqvist, Lars and Thomas J. Sargent (2004), "Competitive Equilibrium with Complete Markets" (Chapter 8), in Recursive Macroeconomic Theory, MIT Press.

[2] Lucas, Robert E., Jr. (1978), "Asset Prices in an Exchange Economy," Econometrica, 46(6), 1429-1445.

[3] Mehra, Rajnish and Edward C. Prescott (1985), "The Equity Premium: A Puzzle," Journal of Monetary Economics, 15, 145-161.
4 Overlapping Generations
4.1 Exchange Economy
The demographic structure of the economy is as follows:

    TIME:      1                2                3          ...
               old (initial)    old              old
               young            young            young      ...
There are $N(t)$ members of generation $t$. Let $c_t^h(t)$ and $c_t^h(t+1)$ be the consumption of agent $h$ in generation $t$ at time $t$ (when young) and at $t+1$ (when old), respectively. Similarly, let $w_t^h(t)$ and $w_t^h(t+1)$ be the endowment of agent $h$ in generation $t$ at time $t$ (when young) and at $t+1$ (when old). At time 1, there are $N(0)$ old people (the initial old).

We will assume that the preferences of agent $h$ in generation $t \geq 1$ are represented by $u_t^h(c_t^h(t), c_t^h(t+1))$, where $u : \mathbb{R}_+^2 \to \mathbb{R}$ is differentiable, with
\[
\frac{\partial u_t^h\big(c_t^h(t), c_t^h(t+1)\big)}{\partial c_t^h(t+j)} > 0
\quad\text{and}\quad
\frac{\partial^2 u_t^h\big(c_t^h(t), c_t^h(t+1)\big)}{\partial \big(c_t^h(t+j)\big)^2} < 0
\quad\text{for } j = 0, 1.
\]
We will also assume that the initial old have preferences represented by a strictly increasing utility function.
Remark 32 We will also use $c_{yt}^h$ and $c_{ot+1}^h$ to denote an agent's consumption when young and when old (and similarly $w_{yt}^h$ and $w_{ot+1}^h$ to denote his endowments). Furthermore, if it is understood that every generation is identical and all agents in each generation are identical, we will simply refer to consumption when young and when old as $c_y$ and $c_o$ (or $c_1$ and $c_2$).
Definition 33 A consumption allocation is a sequence
\[
C = \left\{ \left\{c_{t-1}^h(t)\right\}_{h=1}^{N(t-1)},\ \left\{c_t^h(t)\right\}_{h=1}^{N(t)} \right\}_{t=1}^{\infty}.
\]
An allocation is feasible if, for all $t \geq 1$,
\[
C(t) = \underbrace{\sum_{h=1}^{N(t)} c_t^h(t)}_{C_t(t)} + \underbrace{\sum_{h=1}^{N(t-1)} c_{t-1}^h(t)}_{C_{t-1}(t)}
\;\leq\; \underbrace{\sum_{h=1}^{N(t)} w_t^h(t)}_{Y_t(t)} + \underbrace{\sum_{h=1}^{N(t-1)} w_{t-1}^h(t)}_{Y_{t-1}(t)}.
\]
In a stationary environment with identical agents, consumption per person is therefore bounded by $\frac{Y}{N}$.

4.2 Competitive Equilibrium
[Figure: the set of stationary feasible allocations in the $(c_1, c_2)$ plane, bounded by a line of slope $-1$ through the point $(Y/N,\ Y/N)$.]
4.2.1 Arrow-Debreu Equilibrium

Suppose all agents (born and unborn) participate in a time-1 market where they can sell their endowments and buy consumption goods. Let $p(t)$ be the price of time-$t$ goods in this market. Then the problem of agent $h$ in generation $t \geq 1$ is
\[
\max_{c_t^h(t),\,c_t^h(t+1)} u_t^h\big(c_t^h(t),\,c_t^h(t+1)\big) \tag{P1}
\]
subject to
\[
p(t)c_t^h(t) + p(t+1)c_t^h(t+1) \leq p(t)w_t^h(t) + p(t+1)w_t^h(t+1),
\qquad
c_t^h(t) \geq 0,\quad c_t^h(t+1) \geq 0.
\]
The problem of an initial old agent is
\[
\max_{c_0^h(1)} u_0^h\big(c_0^h(1)\big) \tag{P2}
\]
subject to
\[
p(1)c_0^h(1) \leq p(1)w_0^h(1).
\]
Then, an Arrow-Debreu equilibrium is an allocation
\[
\widehat C = \left\{ \left\{\widehat c_{t-1}^h(t)\right\}_{h=1}^{N(t-1)},\ \left\{\widehat c_t^h(t)\right\}_{h=1}^{N(t)} \right\}_{t=1}^{\infty}
\]
and a price sequence $\{\widehat p(t)\}_{t=1}^{\infty}$ such that, given prices, $\widehat C$ solves each agent's problem, and markets clear for all $t \geq 1$:
\[
\sum_{h=1}^{N(t)} \widehat c_t^h(t) + \sum_{h=1}^{N(t-1)} \widehat c_{t-1}^h(t)
= \sum_{h=1}^{N(t)} w_t^h(t) + \sum_{h=1}^{N(t-1)} w_{t-1}^h(t).
\]
To characterize the solution, consider the Lagrangian
\[
\mathcal{L} = \max_{c_t^h(t),\,c_t^h(t+1)} u_t^h\big(c_t^h(t), c_t^h(t+1)\big) - \lambda\big[p(t)c_t^h(t) + p(t+1)c_t^h(t+1) - p(t)w_t^h(t) - p(t+1)w_t^h(t+1)\big].
\]
The first order conditions are
\[
\frac{\partial u_t^h}{\partial c_t^h(t)} - \lambda\,p(t) = 0,
\qquad
\frac{\partial u_t^h}{\partial c_t^h(t+1)} - \lambda\,p(t+1) = 0,
\]
and therefore
\[
\frac{p(t)}{p(t+1)} = \underbrace{\frac{\partial u_t^h / \partial c_t^h(t)}{\partial u_t^h / \partial c_t^h(t+1)}}_{MRS}.
\]
Note that this is again the standard optimality condition that equates the $MRS$ to the price ratio.

If
\[
u_t^h\big(c_t^h(t), c_t^h(t+1)\big) = u(c_t^h(t)) + \beta u(c_t^h(t+1)),
\]
this condition gives us the intertemporal optimization condition
\[
u'(c_t^h(t)) = \frac{p(t)}{p(t+1)}\,\beta\,u'(c_t^h(t+1)).
\]
4.2.2 Sequential Markets
We could also imagine a market economy in which individuals trade their endowments using one-period lending and borrowing arrangements to maximize their lifetime utility, and in which market clearing determines the interest rate. Let $R(t)$ be the real gross interest rate between $t$ and $t+1$, representing how many time-$t+1$ goods the market is willing to pay for each unit of time-$t$ goods. Then, the consumer's problem is given by
\[
\max_{c_t^h(t),\,c_t^h(t+1),\,l_t^h} u_t^h\big(c_t^h(t),\,c_t^h(t+1)\big) \tag{P3}
\]
subject to
\[
c_t^h(t) + \underbrace{l_t^h}_{\text{lending/borrowing}} \leq w_t^h(t),
\qquad
c_t^h(t+1) \leq w_t^h(t+1) + R(t)\,l_t^h,
\qquad
c_t^h(t) \geq 0,\quad c_t^h(t+1) \geq 0.
\]
The problem of an initial old agent is
\[
\max_{c_0^h(1)} u_0^h\big(c_0^h(1)\big) \tag{P4}
\]
subject to
\[
c_0^h(1) \leq w_0^h(1).
\]
Combining the two within-period constraints gives the lifetime budget constraint
\[
c_t^h(t) + \frac{c_t^h(t+1)}{R(t)} \leq \underbrace{w_t^h(t) + \frac{w_t^h(t+1)}{R(t)}}_{\text{lifetime wealth}}.
\]
Note that lending/borrowing does not appear in this budget constraint, since it is simply a way to allocate resources between the two periods.
An agents optimal decision will be determined by the solution of the following problem:
L=
max
h
ch
t (t);ct (t+1)
wth (t) +
51
wth (t + 1)
R(t)
cht (t)
cht (t + 1)
:
R(t)
@cht (t
= 0;
1
= 0;
R(t)
@uh
t
@ch
t (t)
@uh
t
@ch
t (t+1)
| {z }
M RS
[Figure: the consumer's budget set in the $(c_t^h(t),\,c_t^h(t+1))$ plane, bounded by a line of slope $-R(t)$ through the endowment point $(w_t^h(t),\,w_t^h(t+1))$.]
In equilibrium, the loan market clears:
\[
\sum_{h=1}^{N(t)} \widetilde l_t^h = 0.
\]
Remark 43 Note that for any given exchange economy the Arrow-Debreu equilibrium and the sequential market equilibrium are equivalent. That is, given any Arrow-Debreu equilibrium $\widehat C$ and $\{\widehat p(t)\}_{t=1}^{\infty}$, with $\widehat p(t) > 0$ for all $t$, there is a corresponding sequential equilibrium $\widetilde C$ and $\{\widetilde R(t)\}_{t=1}^{\infty}$, with $\widetilde R(t) > 0$ for all $t$, such that $\widehat C$ and $\widetilde C$ are identical. Similarly, given any sequential equilibrium $\widetilde C$ and $\{\widetilde R(t)\}_{t=1}^{\infty}$, with $\widetilde R(t) > 0$ for all $t$, there is a corresponding Arrow-Debreu equilibrium $\widehat C$ and $\{\widetilde p(t)\}_{t=1}^{\infty}$, with $\widetilde p(t) > 0$ for all $t$, such that $\widehat C$ and $\widetilde C$ are identical.
The budget constraints of the young at time $t$ imply
\[
\sum_{h=1}^{N(t)} c_t^h(t) = \sum_{h=1}^{N(t)} w_t^h(t) - \sum_{h=1}^{N(t)} l_t^h,
\]
while those of the old imply
\[
\sum_{h=1}^{N(t-1)} c_{t-1}^h(t) = R(t-1)\sum_{h=1}^{N(t-1)} l_{t-1}^h + \sum_{h=1}^{N(t-1)} w_{t-1}^h(t).
\]
Then
\[
\sum_{h=1}^{N(t)} c_t^h(t) + \sum_{h=1}^{N(t-1)} c_{t-1}^h(t)
= \sum_{h=1}^{N(t)} w_t^h(t) + \sum_{h=1}^{N(t-1)} w_{t-1}^h(t) - \sum_{h=1}^{N(t)} l_t^h + R(t-1)\sum_{h=1}^{N(t-1)} l_{t-1}^h,
\]
so goods market clearing implies
\[
\sum_{h=1}^{N(t)} l_t^h = R(t-1)\sum_{h=1}^{N(t-1)} l_{t-1}^h.
\]
Hence, if the initial old enter with no claims, i.e.
\[
\sum_{h=1}^{N(0)} l_0^h = 0,
\]
then
\[
\sum_{h=1}^{N(t)} l_t^h = 0 \quad\text{for all } t,
\]
and the loan market clears in every period.
Example 45 Consider an OLG economy with identical agents and no population growth. Let
\[
u_t(c_{yt}, c_{ot+1}) = \log(c_{yt}) + \log(c_{ot+1}),
\]
and
\[
[w_{yt},\,w_{ot+1}] = [3,\,1].
\]
Then each young agent solves
\[
\max_{c_{yt},\,c_{ot+1}} \log(c_{yt}) + \log(c_{ot+1})
\]
subject to
\[
c_{yt} + \frac{c_{ot+1}}{R(t)} \leq w_{yt} + \frac{w_{ot+1}}{R(t)}.
\]
With
\[
\mathcal{L} = \max_{c_{yt},\,c_{ot+1}} \left\{\log(c_{yt}) + \log(c_{ot+1}) + \lambda\Big(w_{yt} + \frac{w_{ot+1}}{R(t)} - c_{yt} - \frac{c_{ot+1}}{R(t)}\Big)\right\},
\]
the first order conditions are
\[
\frac{1}{c_{yt}} - \lambda = 0,
\qquad
\frac{1}{c_{ot+1}} - \frac{\lambda}{R(t)} = 0.
\]
Therefore,
\[
\frac{1}{R(t)}\,\frac{1}{c_{yt}} = \frac{1}{c_{ot+1}} \;\Rightarrow\; c_{ot+1} = R(t)\,c_{yt},
\]
and substituting into the budget constraint,
\[
c_{yt} + \frac{R(t)c_{yt}}{R(t)} = w_{yt} + \frac{w_{ot+1}}{R(t)} \;\Rightarrow\; 2c_{yt} = w_{yt} + \frac{w_{ot+1}}{R(t)},
\]
so that
\[
c_{yt} = \frac{1}{2}\Big[w_{yt} + \frac{w_{ot+1}}{R(t)}\Big],
\qquad
c_{ot+1} = \frac{1}{2}\big[w_{yt}R(t) + w_{ot+1}\big].
\]
Let $l_t$ be the savings of the young, given by
\[
l_t = w_{yt} - c_{yt} = \frac{1}{2}w_{yt} - \frac{1}{2}\frac{w_{ot+1}}{R(t)}.
\]
Since agents are identical, in equilibrium $l_t = 0$, and hence
\[
R(t) = \frac{w_{ot+1}}{w_{yt}} = \frac{1}{3}.
\]
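The market clearing condition of this example can be checked numerically. The sketch below solves $l(R) = 0$ by bisection; the solver choice is incidental (the root is $R = w_o/w_y = 1/3$ by construction).

```python
# Numerical check of Example 45 (log utility, endowments w_y = 3, w_o = 1).
# Savings of the young: l(R) = w_y/2 - w_o/(2R); equilibrium requires l(R) = 0.
w_y, w_o = 3.0, 1.0

def savings(R):
    return 0.5 * w_y - 0.5 * w_o / R

# Solve l(R) = 0 by bisection over an interval bracketing the root.
lo, hi = 0.01, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if savings(mid) > 0:
        hi = mid        # young want to lend: the interest rate is too high
    else:
        lo = mid
print(round(mid, 6))    # 0.333333, i.e. R = 1/3
```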
The stationary allocation $\overline C$ with $\overline c_y = \overline c_o = 2$ is both feasible and Pareto superior to the autarchic allocation. Note that all generations prefer $\overline C$, since
\[
\log(2) + \log(2) > \log(3) + \log(1) \quad\text{for all generations } t \geq 1,
\]
and
\[
\log(2) > \log(1) \quad\text{for the initial old.}
\]
Example 47 Consider an economy with agents having the same utility function as in the previous example,
\[
\max_{c_{yt},\,c_{ot+1}} \log(c_{yt}) + \log(c_{ot+1}),
\]
but let the economy be populated by two types of agents that differ in their endowments:
\[
[w_{yt}^1,\,w_{ot+1}^1] = [1,\,1] \quad\text{and}\quad [w_{yt}^2,\,w_{ot+1}^2] = [2,\,1].
\]
Now type-2 agents want to lend when they are young, and might be able to do this by offering an interest rate to type-1 agents. If we go through the same exercise as above for each type of agent, we get
\[
l_t^1 = w_{yt}^1 - c_{yt}^1 = \frac{1}{2}w_{yt}^1 - \frac{1}{2}\frac{w_{ot+1}^1}{R(t)} = \frac{1}{2} - \frac{1}{2R(t)},
\]
and
\[
l_t^2 = w_{yt}^2 - c_{yt}^2 = \frac{1}{2}w_{yt}^2 - \frac{1}{2}\frac{w_{ot+1}^2}{R(t)} = 1 - \frac{1}{2R(t)}.
\]
Therefore, loan market clearing requires
\[
l_t^1 + l_t^2 = 0 \;\Rightarrow\; \Big(\frac{1}{2} - \frac{1}{2R(t)}\Big) + \Big(1 - \frac{1}{2R(t)}\Big) = 0 \;\Rightarrow\; R(t) = \frac{2}{3},
\]
and
\[
c_{yt}^1 = \frac{1}{2}\Big[w_{yt}^1 + \frac{w_{ot+1}^1}{R(t)}\Big] = \frac{5}{4} \;\Rightarrow\; l_t^1 = -\frac{1}{4},
\qquad
c_{ot+1}^1 = R(t)\,c_{yt}^1 = \frac{2}{3}\cdot\frac{5}{4} = \frac{5}{6},
\]
\[
c_{yt}^2 = \frac{1}{2}\Big[w_{yt}^2 + \frac{w_{ot+1}^2}{R(t)}\Big] = \frac{7}{4} \;\Rightarrow\; l_t^2 = \frac{1}{4},
\qquad
c_{ot+1}^2 = R(t)\,c_{yt}^2 = \frac{2}{3}\cdot\frac{7}{4} = \frac{7}{6}.
\]

Remark 48 Note again that $R(t) = \frac{2}{3}$ is determined by the endowments of the young and the old.
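The numbers in this example can be verified with exact rational arithmetic:

```python
from fractions import Fraction as F

# Check of Example 47: two types with log utility and endowments (1,1) and (2,1).
# Savings of each type: l_i(R) = w_y,i/2 - w_o,i/(2R).
def savings(w_y, w_o, R):
    return w_y / 2 - w_o / (2 * R)

R = F(2, 3)                       # candidate equilibrium gross interest rate
l1 = savings(F(1), F(1), R)       # type-1 lending (negative: they borrow)
l2 = savings(F(2), F(1), R)       # type-2 lending
print(l1 + l2 == 0)               # True: the loan market clears at R = 2/3

c1y = F(1) / 2 * (1 + 1 / R)
c2y = F(1) / 2 * (2 + 1 / R)
print(c1y, R * c1y, c2y, R * c2y) # 5/4 5/6 7/4 7/6
```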
4.3 Pareto Optimality
We will now investigate Pareto optimality in the simple OLG setting that we constructed above. We will go through several claims:

Claim 1 If an allocation is PO, then it is efficient.

Claim 2 Suppose that $u_t(c_t^h(t), c_t^h(t+1))$ is the utility function of person $h$ in generation $t$, $t \geq 1$. Let MRS be defined as
\[
MRS = \frac{u_{t1}^h(c_t^h(t), c_t^h(t+1))}{u_{t2}^h(c_t^h(t), c_t^h(t+1))},
\]
where $u_{tj}^h$ is the partial derivative of the utility function with respect to its $j$th argument. Suppose that $h$ and $h'$ are two members of generation $t$. A feasible allocation that assigns positive 1st and 2nd period consumption to $h$ and $h'$ and implies different $MRS$ for $h$ and $h'$ is not PO.

Claim 3 Consider a stationary and symmetric OLG environment with endowments given by $(w_1, w_2)$. With strictly convex indifference curves, and with $w_1 > 0$, the unique equilibrium (i.e. autarchy) is PO if and only if the MRS at the endowment point is greater than or equal to 1, i.e.
\[
\frac{u_1(w_1, w_2)}{u_2(w_1, w_2)} \geq 1.
\]
Proof Suppose $\frac{u_1(w_1, w_2)}{u_2(w_1, w_2)} < 1$. Then, $u_1(w_1, w_2) < u_2(w_1, w_2)$, and $(c_1 = w_1, c_2 = w_2)$ is not Pareto optimal. To see this, consider the alternative feasible allocation that gives $\tilde c_1 = w_1 - \varepsilon$ to the young and $\tilde c_2 = w_2 + \varepsilon$ to the old in every period. This allocation Pareto dominates $c_1 = w_1$ and $c_2 = w_2$ for some $\varepsilon \in (0, w_1)$ that makes $u_1(\tilde c_1, \tilde c_2) = u_2(\tilde c_1, \tilde c_2)$.

Suppose now $\frac{u_1(w_1, w_2)}{u_2(w_1, w_2)} > 1$, and suppose $c_1 = w_1$ and $c_2 = w_2$ is not Pareto optimal. Then, there must exist an alternative feasible allocation $\tilde c_{1t}$ and $\tilde c_{2t}$ that Pareto dominates $c_1 = w_1$ and $c_2 = w_2$. That is, for all $t$,
\[
u(\tilde c_{1t}, \tilde c_{2t}) \geq u(w_1, w_2),
\]
4.4 Introducing a Government

Suppose now there is a government that can levy taxes and provide subsidies. Let
\[
\tau_t^h = \big[\tau_t^h(t),\ \tau_t^h(t+1)\big]
\]
be the taxes or subsidies that agent $h$ of generation $t$ faces in his/her lifetime. The government budget has to be balanced, hence
\[
\sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) = 0.
\]
The lifetime budget constraint of an agent becomes
\[
c_t^h(t) + \frac{c_t^h(t+1)}{R(t)} \leq w_t^h(t) - \tau_t^h(t) + \frac{w_t^h(t+1) - \tau_t^h(t+1)}{R(t)}.
\]
An equilibrium now requires that: (1) given taxes and interest rates, consumers maximize their utility by choosing the relevant elements of $\widetilde C$; (2) the government budget balances, that is, for all $t \geq 1$,
\[
\sum_{h=1}^{N(t)} \widetilde\tau_t^h(t) + \sum_{h=1}^{N(t-1)} \widetilde\tau_{t-1}^h(t) = 0;
\]
and (3) the loan market clears, that is, for all $t \geq 1$,
\[
\sum_{h=1}^{N(t)} \widetilde l_t^h = 0.
\]
Example 50 Let $[w_{yt}, w_{ot+1}] = [2, 1]$ for all $t$, and assume the same preference structure as in the examples above. Let $[\tau_{yt}, \tau_{ot+1}] = [\frac{1}{2}, -\frac{1}{2}]$ for all $t$. Hence, the government taxes the young and transfers the proceeds to the old. Then, going through the problem of the agents, we have
\[
l_t = w_{yt} - \tau_{yt} - c_{yt} = \frac{1}{2}(w_{yt} - \tau_{yt}) - \frac{1}{2}\,\frac{w_{ot+1} - \tau_{ot+1}}{R(t)}
= \frac{2 - \frac{1}{2}}{2} - \frac{1 + \frac{1}{2}}{2R(t)} = 0
\;\Rightarrow\; R(t) = 1,
\]
and
\[
c_{yt} = c_{ot+1} = \frac{3}{2}.
\]
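These numbers can again be verified with exact arithmetic:

```python
from fractions import Fraction as F

# Check of Example 50: endowments (2, 1), taxes (1/2, -1/2), log utility.
# After-tax savings: l(R) = (w_y - tau_y)/2 - (w_o - tau_o)/(2R).
w_y, w_o = F(2), F(1)
tau_y, tau_o = F(1, 2), F(-1, 2)

def savings(R):
    return (w_y - tau_y) / 2 - (w_o - tau_o) / (2 * R)

R = F(1)
print(savings(R) == 0)                           # True: R = 1 clears the loan market
c_y = (w_y - tau_y) / 2 + (w_o - tau_o) / (2 * R)
print(c_y)                                       # 3/2: consumption when young (= when old)
```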
Note that this allocation is Pareto optimal. In contrast, if there were no government, the allocation would not be Pareto optimal. Note also that $\tau = \frac{1}{2}$ is not a random tax rate for the economy in this example. If we wanted to find the tax/transfer rate that maximizes a representative agent's lifetime utility, we would solve the following problem:
\[
\max_{\tau}\; \log(w_y - \tau) + \log(w_o + \tau),
\]
where we use the fact that autarchy is the competitive equilibrium for this economy. The first order condition,
\[
\frac{1}{w_y - \tau} = \frac{1}{w_o + \tau},
\]
implies that the optimal tax rate is given by
\[
\tau = \frac{w_y - w_o}{2} = \frac{1}{2}.
\]
Now let the government be able to borrow from the young as well (note that the government cannot borrow from the old). Suppose the government issues one-period bonds that are sure claims on 1 unit of goods the next period. Then, if $B(t)$ is the number of bonds sold at time $t$, at time $t+1$ the government needs $B(t)$ units of time-$t+1$ goods to be able to pay back its commitments. The government can achieve this by taxing the young, taxing the old, or by issuing new bonds at time $t+1$.

The government budget is then given by
\[
\sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) + p(t)B(t) - B(t-1) = 0,
\]
where $p(t)$ is the price of a unit of government bonds at time $t$. The government budget is in balance if
\[
\sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) = 0,
\]
and if $\sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) < 0$ and $B(t-1) > 0$, then $p(t)B(t) - B(t-1) > 0$ and the government needs to issue new bonds, i.e. $p(t)B(t) > B(t-1)$.
Note that with government bonds individual budget constraints become
\[
c_t^h(t) = w_t^h(t) - \tau_t^h(t) - l_t^h - p(t)b_t^h,
\]
and
\[
c_t^h(t+1) = w_t^h(t+1) - \tau_t^h(t+1) + R(t)l_t^h + b_t^h,
\]
where $b_t^h$ is the demand for government bonds by agent $h$ of generation $t$. Note that an agent demands $b_t^h$ units of government bonds at a unit price of $p(t)$ when young, and these deliver $b_t^h$ units of goods next period. Note also that the rate of return on government bonds is $\frac{1}{p(t)}$.

The lifetime budget constraint for an agent is then given by
\[
c_t^h(t) + \frac{c_t^h(t+1)}{R(t)} = w_t^h(t) - \tau_t^h(t) + \frac{w_t^h(t+1) - \tau_t^h(t+1)}{R(t)} - b_t^h\Big(p(t) - \frac{1}{R(t)}\Big).
\]
Note that an equilibrium exists only if $R(t) = \frac{1}{p(t)}$. Otherwise the return on government bonds and the return on private lending and borrowing are not the same, and agents can make profits by borrowing from other agents and lending to the government (or vice versa). When $R(t) = \frac{1}{p(t)}$, these arbitrage opportunities are all exhausted.
When $R(t) = \frac{1}{p(t)}$, the budget constraint of agent $h$ becomes
\[
c_t^h(t) + \frac{c_t^h(t+1)}{R(t)} = w_t^h(t) - \tau_t^h(t) + \frac{w_t^h(t+1) - \tau_t^h(t+1)}{R(t)}.
\]
Hence, agents simply try to maximize their lifetime utility given their lifetime resources. They are indifferent between lending to the government and lending to other agents. All that matters is the total amount that agents want to lend (to the government or to the others),
\[
s_t^h = l_t^h + p(t)b_t^h = l_t^h + \frac{b_t^h}{R(t)}.
\]
In equilibrium,
\[
\sum_{h=1}^{N(t)} c_t^h(t) = \sum_{h=1}^{N(t)} w_t^h(t) - \sum_{h=1}^{N(t)} \tau_t^h(t) - \underbrace{\sum_{h=1}^{N(t)} l_t^h}_{=0} - \sum_{h=1}^{N(t)} p(t)b_t^h,
\]
and the total amount that agents are willing to lend to the government is
\[
\sum_{h=1}^{N(t)} w_t^h(t) - \sum_{h=1}^{N(t)} \tau_t^h(t) - \sum_{h=1}^{N(t)} c_t^h(t) = \underbrace{\sum_{h=1}^{N(t)} p(t)b_t^h}_{\substack{\text{total lending to}\\ \text{the government}}}.
\]
An equilibrium with government bonds is then a consumption allocation $\widetilde C$, a sequence of lending/borrowing decisions $\big\{\{\widetilde s_t^h\}_{h=1}^{N(t)}\big\}_{t=1}^{\infty}$, and a sequence of interest rates $\{\widetilde R(t)\}_{t=1}^{\infty}$ such that:
(1) given $(\widetilde\tau_{t-1}^h(t), \widetilde\tau_t^h(t))_{t=1}^{\infty}$ and $\{\widetilde R(t)\}_{t=1}^{\infty}$, consumers maximize their utility by choosing the relevant elements of $\widetilde C$;
(2) the government budget constraint holds, that is, for all $t$,
\[
\sum_{h=1}^{N(t)} \widetilde\tau_t^h(t) + \sum_{h=1}^{N(t-1)} \widetilde\tau_{t-1}^h(t) + \frac{\widetilde B(t)}{\widetilde R(t)} - \widetilde B(t-1) = 0;
\]
(3) the loan market clears, that is, for all $t$,
\[
\underbrace{\sum_{h=1}^{N(t)} \Big(\widetilde l_t^h + \frac{\widetilde b_t^h}{\widetilde R(t)}\Big)}_{\widetilde S_t(\widetilde R(t))} - \frac{\widetilde B(t)}{\widetilde R(t)} = 0.
\]
If the government also purchases $G(t)$ units of goods, the budget constraint becomes
\[
\sum_{h=1}^{N(t)} \widetilde\tau_t^h(t) + \sum_{h=1}^{N(t-1)} \widetilde\tau_{t-1}^h(t) + \frac{B(t)}{R(t)} - B(t-1) - G(t) = 0.
\]
Remark 53 Note again that the goods market equilibrium and the government budget constraint imply the loan market equilibrium. The budget constraints of young agents at time $t$ imply
\[
\sum_{h=1}^{N(t)} c_t^h(t) = \sum_{h=1}^{N(t)} w_t^h(t) - \sum_{h=1}^{N(t)} \tau_t^h(t) - \sum_{h=1}^{N(t)} s_t^h,
\]
while those of the old imply
\[
\sum_{h=1}^{N(t-1)} c_{t-1}^h(t) = R(t-1)\sum_{h=1}^{N(t-1)} s_{t-1}^h + \sum_{h=1}^{N(t-1)} w_{t-1}^h(t) - \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t).
\]
Then
\[
\sum_{h=1}^{N(t)} c_t^h(t) + \sum_{h=1}^{N(t-1)} c_{t-1}^h(t)
= \sum_{h=1}^{N(t)} w_t^h(t) + \sum_{h=1}^{N(t-1)} w_{t-1}^h(t)
- \sum_{h=1}^{N(t)} \tau_t^h(t) - \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t)
- \sum_{h=1}^{N(t)} s_t^h + R(t-1)\sum_{h=1}^{N(t-1)} s_{t-1}^h,
\]
and goods market clearing implies
\[
0 = -\sum_{h=1}^{N(t)} s_t^h + R(t-1)\sum_{h=1}^{N(t-1)} s_{t-1}^h - \sum_{h=1}^{N(t)} \tau_t^h(t) - \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t).
\]
Since the government budget constraint,
\[
\sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) + p(t)B(t) - B(t-1) = 0,
\]
implies
\[
\sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) = B(t-1) - p(t)B(t),
\]
we have
\[
\sum_{h=1}^{N(t)} s_t^h = R(t-1)\sum_{h=1}^{N(t-1)} s_{t-1}^h - B(t-1) + p(t)B(t).
\]
If the initial old and the government have no outstanding claims at time 0, that is, if
\[
\sum_{h=1}^{N(0)} s_0^h = 0 \quad\text{and}\quad B(0) = 0,
\]
we have
\[
\sum_{h=1}^{N(1)} s_1^h = p(1)B(1),
\]
and
\[
\sum_{h=1}^{N(2)} s_2^h = R(1)\sum_{h=1}^{N(1)} s_1^h + p(2)B(2) - B(1)
= R(1)\left[\sum_{h=1}^{N(1)} s_1^h - \frac{B(1)}{R(1)}\right] + p(2)B(2)
= p(2)B(2),
\]
so the loan market clears at time 2 as well. Indeed, it clears in every period.
Example 54 Suppose that in period 1 the government wishes to borrow 5 units of time-1 goods and transfer them to the initial old. It will pay off this debt by taxing the young of generation 2 and will not issue new debt. Let
\[
[w_{yt}^1, w_{ot+1}^1] = [2, 1], \text{ with } N_t^1 = 50,
\quad\text{and}\quad
[w_{yt}^2, w_{ot+1}^2] = [1, 1], \text{ with } N_t^2 = 50, \quad \forall t,
\]
and
\[
u_t^h = c_{yt}^h\,c_{ot+1}^h.
\]
It is straightforward to derive
\[
s_t^1 = 1 - \frac{1}{2R(t)},
\qquad
s_t^2 = \frac{1}{2} - \frac{1}{2R(t)}.
\]
Then,
\[
\sum_h s_t^h(R(t)) = 50\Big(1 - \frac{1}{2R(t)}\Big) + 50\Big(\frac{1}{2} - \frac{1}{2R(t)}\Big) = 75 - \frac{50}{R(t)}.
\]
At time 1, the loan market clears when aggregate savings equal the government's borrowing:
\[
75 - \frac{50}{R(1)} - \underbrace{\frac{B(1)}{R(1)}}_{=5} = 0,
\]
and hence
\[
R(1) = \frac{5}{7}.
\]
Given $R(1)$,
\[
s_1^1 = 1 - \frac{1}{2R(1)} = 0.3,
\qquad
s_1^2 = \frac{1}{2} - \frac{1}{2R(1)} = -0.2.
\]
Then,
\[
B(1) = 5R(1) = \frac{25}{7}.
\]
Note that type-1 agents at time 1 want to lend 0.3 each, or 15 as a whole. 10 units of this are borrowed by type-2 agents, and the remaining 5 are borrowed by the government. Each bond costs $p_1 = \frac{7}{5}$ units of time-1 goods. Since type-1 agents have more resources when young, they are willing to pay a price higher than one for a unit of time-2 goods.
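These equilibrium objects can be checked with exact rational arithmetic:

```python
from fractions import Fraction as F

# Check of Example 54: aggregate savings 50*(1 - 1/(2R)) + 50*(1/2 - 1/(2R))
# must equal the 5 units the government borrows at time 1.
def aggregate_savings(R):
    return 50 * (1 - 1 / (2 * R)) + 50 * (F(1, 2) - 1 / (2 * R))

R1 = F(5, 7)
print(aggregate_savings(R1))        # 5: exactly the government's borrowing
print(5 * R1)                       # 25/7: face value of the bonds, B(1)
print(1 / R1)                       # 7/5: price of a claim on one unit of time-2 goods
```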
Example 55 Let
\[
[w_{yt}, w_{ot+1}] = [2, 1], \text{ with } N_t = 100,
\]
and
\[
u_t^h = c_{yt}^h\,c_{ot+1}^h.
\]
Suppose the government issues bonds in the first period and gives the revenue to the current old. Moreover, the government wants to raise 50 units of period-1 goods. After period 1, the government issues new bonds in each period to pay the outstanding claims. Note that
\[
s_t = 1 - \frac{1}{2R(t)},
\]
hence
\[
\sum_h s_t = 100 - \frac{50}{R(t)}.
\]
At $t = 1$,
\[
100 - \frac{50}{R(1)} = \frac{B(1)}{R(1)} = 50,
\]
and therefore
\[
R(1) = 1 \quad\text{and}\quad B(1) = 50.
\]
Then,
\[
s_1 = 1 - \frac{1}{2} = \frac{1}{2}.
\]
Hence, each young agent lends the government $\frac{1}{2}$ and each old agent consumes $\frac{1}{2}$ units extra. Next period the same situation is repeated, with $B(2) = 50$ and $R(2) = 1$, etc. Note that here the government takes the place of a permanent borrower, and allows agents to transfer resources from one period to the next.
Remark 56 Since $R(t) = \frac{1}{p(t)}$, we could model government borrowing slightly differently. We could simply state that the government wants to borrow some amount $B(t)$ at time $t$, and, like any other agent in this economy, is willing to pay the interest rate $R(t)$. Then the government budget constraint would be
\[
\sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) + B(t) - R(t)B(t-1) = 0.
\]

Recall that an agent's lifetime budget constraint is
\[
c_t^h(t) + \frac{c_t^h(t+1)}{R(t)} = w_t^h(t) - \tau_t^h(t) + \frac{w_t^h(t+1) - \tau_t^h(t+1)}{R(t)}.
\]
Now consider an alternative tax scheme that agent $h$ faces, $[\widetilde\tau_t^h(t), \widetilde\tau_t^h(t+1)]$. As long as
\[
\widetilde\tau_t^h(t) + \frac{\widetilde\tau_t^h(t+1)}{R(t)} = \tau_t^h(t) + \frac{\tau_t^h(t+1)}{R(t)},
\]
the agent's optimal decision will be the same. This is simply because his/her budget set doesn't change. Then, we can state the following:
Proposition 57 Consider a sequential equilibrium $(\widetilde\tau_{t-1}^h(t), \widetilde\tau_t^h(t))_{t=1}^{\infty}$, $\{\widetilde B(t)\}_{t=1}^{\infty}$, $\widetilde C$ and $\{\widetilde R(t)\}_{t=1}^{\infty}$, where $\widetilde B(t) = 0$ for all $t$. Then, alternative taxes and transfers $(\widehat\tau_{t-1}^h(t), \widehat\tau_t^h(t))_{t=1}^{\infty}$ that satisfy
\[
\widehat\tau_t^h(t) + \frac{\widehat\tau_t^h(t+1)}{R(t)} = \widetilde\tau_t^h(t) + \frac{\widetilde\tau_t^h(t+1)}{R(t)},
\]
and
\[
\sum_{h=1}^{N(t)} \widehat\tau_t^h(t) + \sum_{h=1}^{N(t-1)} \widehat\tau_{t-1}^h(t) = 0
\]
for all $t$, are equivalent. That is, the allocations and the prices under $(\widehat\tau_{t-1}^h(t), \widehat\tau_t^h(t))_{t=1}^{\infty}$ are identical to the allocations and prices under $(\widetilde\tau_{t-1}^h(t), \widetilde\tau_t^h(t))_{t=1}^{\infty}$.
Indeed, we could state the following two results (which you should try to prove) as well:

Proposition 58 Consider a sequential equilibrium $(\widetilde\tau_{t-1}^h(t), \widetilde\tau_t^h(t))_{t=1}^{\infty}$, $\{\widetilde B(t)\}_{t=1}^{\infty}$, $\widetilde C$ and $\{\widetilde R(t)\}_{t=1}^{\infty}$. These equilibrium allocations can be duplicated with alternative taxes and transfers $(\widehat\tau_{t-1}^h(t), \widehat\tau_t^h(t))_{t=1}^{\infty}$ satisfying
\[
\sum_{h=1}^{N(t)} \widehat\tau_t^h(t) + \sum_{h=1}^{N(t-1)} \widehat\tau_{t-1}^h(t) = 0 \quad\text{for all } t,
\]
and $\widehat B(t) = 0$. That is, given $(\widehat\tau_{t-1}^h(t), \widehat\tau_t^h(t))_{t=1}^{\infty}$ and $\widehat B(t) = 0$, the equilibrium allocations and the equilibrium interest rates will be identical to $\widetilde C$ and $\{\widetilde R(t)\}_{t=1}^{\infty}$.
Proposition 59 Consider a sequential equilibrium $(\widetilde\tau_{t-1}^h(t), \widetilde\tau_t^h(t))_{t=1}^{\infty}$, $\{\widetilde B(t)\}_{t=1}^{\infty}$, $\widetilde C$ and $\{\widetilde R(t)\}_{t=1}^{\infty}$. Then, alternative taxes and transfers $(\widehat\tau_{t-1}^h(t), \widehat\tau_t^h(t))_{t=1}^{\infty}$ that satisfy
\[
\widehat\tau_t^h(t) + \frac{\widehat\tau_t^h(t+1)}{R(t)} = \widetilde\tau_t^h(t) + \frac{\widetilde\tau_t^h(t+1)}{R(t)} \quad\text{for all } h \text{ and all } t,
\]
are equivalent. That is, corresponding to the alternative taxation pattern, there is a pattern of government borrowing such that the initial equilibrium's consumption allocation and the initial equilibrium gross interest rates constitute an equilibrium under the alternative taxation pattern.
Note that these propositions imply the neutrality of government policy. If taxes are changed in a particular way (i.e. if they change so that each agent faces the same present value of taxes), the government policy has no real effect (i.e. allocations and the interest rate remain the same). These results are usually referred to as Ricardian Equivalence results. In the next section we will show that if agents care about others (i.e. they are altruistic), then we can arrive at even more general neutrality results.
4.5 Intergenerational Linkages

Since the government policies we have considered so far redistribute resources between the young and the old that are alive at a point in time, or between the young and the old that are alive in different periods, it is very important to ask the following question: what are the effects of government policies if generations care about each other's welfare (i.e. they are altruistic)?

To analyze this question, consider the following form of altruism: generation 0 cares about the well-being of generation 1, and generations 1, 2, ... are all selfish. Hence,
\[
u_0^h = u_0^h\big(c_0^h(1),\ u_1^h(c_1^h(1), c_1^h(2))\big),
\]
where $u_1^h(c_1^h(1), c_1^h(2))$ is the lifetime utility of generation 1. Hence, generation 0 can choose to transfer some of its resources to generation 1. Let $b^h(0)$ be the bequest of generation 0 to generation 1. Note that since generation 1 is selfish, it will simply take $b^h(0)$ as an addition to its endowments. The budget constraints faced by generations 0 and 1 are now given by
\[
c_0^h(1) = w_0^h(1) - \tau_0^h(1) - b^h(0),
\]
\[
c_1^h(1) = w_1^h(1) - \tau_1^h(1) - l_1^h + b^h(0),
\]
and
\[
c_1^h(2) = w_1^h(2) - \tau_1^h(2) + R(1)l_1^h,
\]
with the government budget constraint
\[
\sum_{h=1}^{N(0)} \tau_0^h(1) + \sum_{h=1}^{N(1)} \tau_1^h(1) = 0.
\]
An interior optimal bequest $b(0)$ equates the marginal cost and the marginal benefit of leaving a bequest:
\[
\underbrace{\frac{\partial u_0}{\partial c_0(1)}}_{\text{marginal cost}}
= \underbrace{\frac{\partial u_0}{\partial u_1}\frac{\partial u_1}{\partial c_1(1)}}_{\text{marginal benefit}}.
\]
[Figure 3: the bequest decision of generation 0. The horizontal axis measures $c_0(1)$ and the vertical axis $c_1(1)$; a line of slope $-1$ passes through the after-tax endowment point $C = (w_0(1) - \tau_0(1),\ w_1(1) - \tau_1(1))$; a bequest $b$ moves the allocation up along this line, and points with $b < 0$ are not allowed.]
Figure 3 shows the trade-off that is faced by generation 0. Given an after-tax endowment point, generation 0 decides how much bequest to leave. We restrict bequests to be non-negative. Given the preferences of generation 0, we can determine the optimal bequest level. Note that it is possible that the bequest motive is not operative, i.e. $b(0) = 0$. The bequests will be zero if
\[
\frac{\partial u_0}{\partial c_0(1)} > \frac{\partial u_0}{\partial u_1}\frac{\partial u_1}{\partial c_1(1)}.
\]
Now consider changes in taxes and transfers in Figure 3. Note that as long as taxes and transfers do not change the location of point $C$, the bequest will be reduced or increased by generation 0 one-to-one with taxes. Hence, as long as the bequest motive is operative (i.e. the old choose to leave bequests), taxes and transfers that redistribute income between the old and the young at time 1 have no effect on real allocations. The taxes and transfers will have a real effect only if they force the old not to leave any bequests.

In Figure 4, the old consume their after-tax endowment (point D) and do not leave any bequests. Indeed, they would like to receive transfers from the young (i.e. choose $b(0) < 0$), but that is ruled out.
[Figure 4: as Figure 3, but the indifference curves of generation 0 are such that the optimum is at the after-tax endowment point D, with zero bequest.]

Note that without altruism, only taxes that keep each agent's lifetime tax liabilities constant have no real effect. With altruism, government taxes that transfer resources from the current old to the current young might have no
real effect, as long as the bequest motive is operative. Furthermore, government borrowing from the current old (which is financed by taxing the young in the future) might not have any real effect either. See Aiyagari (1987) and Barro (1974) for a detailed discussion of the effects of intergenerational linkages on government policies in OLG models.
4.6 Production

Consider an OLG economy with identical agents who live for two periods. When young, each agent has 1 unit of labor that he/she supplies inelastically, earning the wage $w_t$. Given $w_t$, each agent decides how much to consume, $c_t$, and how much to save, $s_t$. When old, they rent their savings as capital to the firm, which pays $r_{t+1}$ for every unit rented; together with the undepreciated capital, each unit of savings thus returns $1 + r_{t+1} - \delta$. Hence, in this model agents can store their goods and keep them for the next period as capital. Capital depreciates at rate $\delta \in (0, 1)$ after production. Moreover, assume that population grows at rate $n$, the initial population size is $N_0$, i.e. $N_t = (1+n)^t N_0$, and the initial old are endowed with $k_1$ units of per capita capital at time 1.

Agents discount the future, i.e. consumption when old, at rate $\beta$. Their lifetime utility is given by
\[
u(c_{1t}) + \beta u(c_{2t+1}).
\]
There is a large number of firms that have access to the aggregate production technology represented by
\[
Y = F(K, L),
\]
where $Y$ is output, $K$ is capital used, and $L$ is labor used. The production function $F$ is called a neoclassical production function if it has positive first and negative second derivatives with respect to each argument,
\[
\frac{\partial F}{\partial K} > 0,\quad \frac{\partial F}{\partial L} > 0,\quad \frac{\partial^2 F}{\partial K^2} < 0,\quad \frac{\partial^2 F}{\partial L^2} < 0,
\]
it is constant returns to scale (CRS), i.e.
\[
F(\lambda K, \lambda L) = \lambda F(K, L) \quad\text{for all } \lambda > 0,
\]
and it satisfies the Inada conditions
\[
\lim_{K \to 0} \frac{\partial F}{\partial K} = \infty,\quad
\lim_{K \to \infty} \frac{\partial F}{\partial K} = 0,\quad
\lim_{L \to 0} \frac{\partial F}{\partial L} = \infty,\quad
\lim_{L \to \infty} \frac{\partial F}{\partial L} = 0.
\]
Since the production function is CRS, we can assume that there is only one representative firm. The firm's objective is to maximize profits, i.e. the firm's problem is
\[
\max_{L, K} F(K, L) - wL - rK.
\]
The first order conditions are
\[
w = \frac{\partial Y}{\partial L}
\qquad\text{and}\qquad
r = \frac{\partial Y}{\partial K},
\]
which determine competitive input prices.

Moreover, since the production function is CRS, profits are zero. This results from Euler's theorem, which states that if $f(x, y)$ is homogeneous of degree one, then
\[
f(x, y) = \frac{\partial f}{\partial x}x + \frac{\partial f}{\partial y}y.
\]
Finally, since the production function is CRS, we can represent per capita output as a function of the per capita capital stock, since
\[
Y = F(K, L) = L\,F\Big(\frac{K}{L}, 1\Big) = L\,f\Big(\frac{K}{L}\Big),
\]
hence
\[
\frac{Y}{L} = y = f(k).
\]
Therefore, the marginal products of capital and labor can be found as
\[
\frac{\partial Y}{\partial K} = L\,f'\Big(\frac{K}{L}\Big)\frac{1}{L} = f'(k),
\]
and
\[
\frac{\partial Y}{\partial L} = f\Big(\frac{K}{L}\Big) + L\,f'\Big(\frac{K}{L}\Big)\Big(-\frac{K}{L^2}\Big) = f(k) - k\,f'(k).
\]

4.6.1 Individual's problem
An agent born at time $t \geq 1$ solves
\[
\max u(c_{1t}) + \beta u(c_{2t+1})
\]
subject to
\[
c_{1t} + s_t = w_t,
\qquad
c_{2t+1} = (1-\delta)s_t + r_{t+1}s_t = (1 + r_{t+1} - \delta)s_t.
\]
The first order condition is
\[
u'(c_{1t}) = \beta(1 + r_{t+1} - \delta)\,u'(c_{2t+1}),
\]
or
\[
G(s_t, w_t, r_{t+1}) \equiv u'(w_t - s_t) - \beta(1 + r_{t+1} - \delta)\,u'\big((1 + r_{t+1} - \delta)s_t\big) = 0,
\]
which implicitly defines the savings function $s_t = s(w_t, r_{t+1})$. Differentiating, we find
\[
s_{w_t} = \frac{\partial s_t}{\partial w_t} = -\frac{G_{w_t}}{G_{s_t}} > 0,
\qquad
s_{r_t} = \frac{\partial s_t}{\partial r_{t+1}} = -\frac{G_{r_t}}{G_{s_t}} \gtrless 0.
\]
4.6.2 Firms

Firms try to maximize profits, and labor and capital markets are competitive; hence we have the following FOCs:
\[
w_t = f(k_t) - k_t f'(k_t),
\qquad
r_t = f'(k_t).
\]
4.6.3 Determination of $k_t$

The capital stock next period is given by total savings this period, i.e.
\[
K_{t+1} = N_t\,s(w_t, r_{t+1}),
\]
or
\[
(1+n)k_{t+1} = s(w_t, r_{t+1}) = s\big(w_t(k_t),\,r_{t+1}(k_{t+1})\big),
\]
hence
\[
k_{t+1} = \frac{s\big[f(k_t) - k_t f'(k_t),\ f'(k_{t+1})\big]}{1+n}.
\]
Since the last equation relates current and next period levels of per capita capital, it determines how this economy's per capita capital stock evolves given any initial value of the per capita capital stock, $k_0$. A steady state $k^*$ is locally stable if
\[
\left|\frac{dk_{t+1}}{dk_t}\right| = \left|\frac{-s_w(k^*)\,k^*\,f''(k^*)}{1 + n - s_r(k^*)\,f''(k^*)}\right| < 1.
\]
Note that we could alternatively assume that the firm buys the capital of the old, so that it sells output $F(K, L) + (1-\delta)K$. In this case, the firm takes the capital of the old and pays
\[
R = 1 + \frac{\partial F}{\partial K} - \delta.
\]
It then sells the production and the undepreciated part of the capital to the consumers. This would not change anything, since the decisions of the young and the old would be the same as above.
4.6.4 An Example

Consider the following version of the economy outlined above, with $n = 0$ and $L_t = N$ for all $t$. Agents are identical and have the following preferences:
\[
u(c_{yt}, c_{ot+1}) = \log(c_{yt}) + \beta \log(c_{ot+1}).
\]
The technology is Cobb-Douglas, and the firm's problem is
\[
\max_{K_t, L_t} A K_t^a L_t^{1-a} - w_t L_t - r_t K_t,
\]
where $w_t$ is the wage rate and $r_t$ is the user cost of capital. The FOCs are
\[
w_t = (1-a)A K_t^a L_t^{-a} = (1-a)A\Big(\frac{K_t}{L_t}\Big)^a = (1-a)A(k_t)^a,
\]
and
\[
r_t = aA K_t^{a-1} L_t^{1-a} = aA(k_t)^{a-1}.
\]
Each young agent solves
\[
\max_{c_{yt},\,c_{ot+1}} \log(c_{yt}) + \beta \log(c_{ot+1})
\]
subject to
\[
c_{yt} + s_t = w_t,
\qquad
c_{ot+1} = (1 + r_{t+1} - \delta)s_t,
\]
or, in lifetime form,
\[
c_{yt} + \frac{c_{ot+1}}{1 + r_{t+1} - \delta} = w_t.
\]
The FOCs imply
\[
\frac{1}{c_{yt}} = \beta(1 + r_{t+1} - \delta)\frac{1}{c_{ot+1}}
\;\Rightarrow\;
c_{ot+1} = \beta c_{yt}(1 + r_{t+1} - \delta).
\]
Then,
\[
c_{yt} = \frac{1}{1+\beta}\,w_t
\;\Rightarrow\;
s_t = w_t - c_{yt} = \frac{\beta}{1+\beta}\,w_t.
\]
The initial old simply consume the return on their capital endowment,
\[
c_{o1} = (1 + r_1 - \delta)k_1.
\]
Since the capital stock next period equals the savings of the young, $K_{t+1} = N s_t$ for $t \geq 1$, so that
\[
k_{t+1} = s_t = \frac{\beta}{1+\beta}\,w_t = \frac{\beta}{1+\beta}(1-a)A(k_t)^a.
\]
This economy has two steady states where $k_{t+1} = k_t = k$ (see Figure 13). One is $k = 0$ for all $t$. The other one solves
\[
k = \frac{\beta}{1+\beta}(1-a)A(k)^a
\;\Rightarrow\;
k = \left[\frac{\beta}{1+\beta}(1-a)A\right]^{\frac{1}{1-a}}.
\]

[Figure 13: the map $k_{t+1} = \frac{\beta}{1+\beta}(1-a)A(k_t)^a$ plotted against the 45° line, with the positive steady state $k^*$ and a convergent path from $k_0$.]
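The convergence to the positive steady state can be illustrated by iterating the law of motion; the parameter values below are arbitrary illustrative choices.

```python
# Iterate k_{t+1} = beta/(1+beta) * (1-a)*A * k_t^a for the log/Cobb-Douglas
# example; parameter values are illustrative, not calibrated.
beta, a, A = 0.9, 0.3, 1.0
s = beta / (1 + beta) * (1 - a) * A     # slope coefficient of the map

k = 0.1                                 # any positive initial capital works
for _ in range(200):
    k = s * k ** a

k_star = s ** (1 / (1 - a))             # closed-form positive steady state
print(abs(k - k_star) < 1e-12)          # True: the iteration converges to k*
```

Near the steady state the map has slope $a < 1$, which is why convergence is monotone and geometric from any $k_0 > 0$.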
4.7 Introducing Government
73
450
kt+1
k*
k0
kt
k*
4.7.1
A pay-as-you-go social security system simply taxes the young and the transfers
those resources to the current old. If d is payments to the social security system
by the current young and the b is the benets received by the current old, then
the contributions and benets are related by
b = (1 + n)d:
Hence, the problem of an agent born at time becomes
max u(c1t ) + u(c2t+1 );
subject to
c1t + st = wt
d;
and
c2t+1
4.7.2
=
=
(1
)st + rt+1 st + (1 + n)d;
(1 + rt+1
)st + (1 + n)d:
4.7.2 Income Taxes
Suppose now the government taxes labor income at rate $\tau_w$ and capital income at rate $\tau_k$. The budget constraints of an agent become
\[
c_{1t} + s_t = (1 - \tau_w)w_t,
\]
and
\[
c_{2t+1} = s_t + (1 - \tau_k)(r_{t+1} - \delta)s_t,
\]
while the government budget constraint is
\[
\tau_w w_t N_t + \tau_k(r_t - \delta)s_{t-1}N_{t-1} + B(t) = G_t + B(t-1)R(t),
\]
where $B(t)$ and $B(t-1)$ are government borrowing at time $t$ and time $t-1$, and $R(t) = 1 + r_t - \delta$. Note that the government has to pay the young the return they would get by holding on to their goods and renting them out as capital.
4.8 Dynamic Efficiency

Consider a social planner who weighs all generations equally and solves
\[
\max \sum_{t=1}^{\infty} \big[u(c_{1t}) + \beta u(c_{2t+1})\big]
\]
subject to
\[
(1-\delta)k_t + f(k_t) \geq (1+n)k_{t+1} + c_{1t} + \frac{c_{2t}}{1+n},
\]
with $k_1 > 0$ given. The first order conditions imply
\[
u'(c_{2t}) = \frac{1}{\beta(1+n)}\,u'(c_{1t})
\]
for the allocation between young and old, and
\[
u'(c_{1t-1})(1+n) = \big(1 + f'(k_t) - \delta\big)\,u'(c_{1t})
\]
for $k_t$. The first FOC characterizes the allocation of resources between the two people who are alive at time $t$, while the second FOC characterizes the optimal accumulation decision.

Note that this problem's FOCs imply the following steady state values for $c_1$, $c_2$, and $k^*$:
\[
u'(c_2^*) = \frac{1}{\beta(1+n)}\,u'(c_1^*), \tag{47}
\]
and
\[
1 - \delta + f'(k^*) = 1 + n. \tag{48}
\]
The last equation determines the steady state value of the per capita capital stock:
\[
f'(k^*) = n + \delta.
\]
This relation is called the golden rule of capital accumulation and characterizes the efficient steady state capital stock.
Note that in this steady state
(1
where c = c1 +
c2
(1+n) :
)k + f (k ) = k + nk + c ;
Hence,
f (k )
nk
k =c ;
and the capital stock that maximize the consumption of a representative agent
at the steady state is given by
dc
= f 0 (k )
dk
= 0:
Therefore,
\[
\frac{dc^*}{dk^*} = f'(k^*) - n - \delta \gtrless 0 \iff f'(k^*) \gtrless n + \delta;
\]
hence, if $f'(k^*) < n + \delta$, i.e. the capital stock exceeds the golden rule level, a decrease in the capital stock will increase the steady state level of per capita consumption. In that case the economy is over-accumulating capital, so that the technology does not return what is necessary to keep the per capita capital stock constant; see Abel et al. (1989) for further discussion.
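For a Cobb-Douglas technology $f(k) = Ak^a$ (an illustrative choice, with made-up parameter values), the golden rule condition can be checked numerically:

```python
# Golden-rule capital for an illustrative Cobb-Douglas technology f(k) = A*k^a:
# f'(k) = n + delta gives k_g = (a*A/(n+delta))^(1/(1-a)).
A, a, n, delta = 1.0, 0.3, 0.02, 0.05

k_g = (a * A / (n + delta)) ** (1 / (1 - a))

def c_star(k):
    """Steady-state per-capita consumption c = f(k) - (n + delta)*k."""
    return A * k ** a - (n + delta) * k

# Consumption at k_g beats consumption slightly below and slightly above it
eps = 1e-3
print(c_star(k_g) > c_star(k_g - eps) and c_star(k_g) > c_star(k_g + eps))  # True
```

Since $c^*(k)$ is strictly concave here, the first order condition $f'(k) = n + \delta$ indeed picks out the global maximum.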
References

[1] Abel, Andrew B., N. Gregory Mankiw, Lawrence H. Summers, and Richard J. Zeckhauser (1989), "Assessing Dynamic Efficiency: Theory and Evidence," Review of Economic Studies, 56, 1-20.

[2] Aiyagari, S. Rao (1987), "Intergenerational Linkages and Government Budget Policies," Federal Reserve Bank of Minneapolis Quarterly Review (Spring 1987), 14-23.

[3] Barro, Robert (1974), "Are Government Bonds Net Wealth?" Journal of Political Economy, 82, 1095-1117.

[4] Diamond, Peter A. (1965), "National Debt in a Neoclassical Growth Model," American Economic Review, 55, 1126-1150.

[5] Kehoe, Timothy J. (1989), "Intertemporal General Equilibrium Models," in The Economics of Missing Markets, Information, and Games, Frank Hahn (ed.).

[6] Samuelson, Paul A. (1958), "An Exact Consumption-Loan Model of Interest with or without the Social Contrivance of Money," Journal of Political Economy, 66, 467-482.
5 Dynamic Programming

Consider the following sequential problem:
\[
\max_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1}) \tag{SP}
\]
subject to
\[
x_{t+1} \in \Gamma(x_t), \quad t = 0, 1, 2, \ldots,
\]
and $x_0$ given.

In this set-up, time is discrete and the horizon is infinite, $t = 0, 1, 2, \ldots$. At time 0, the economy starts with $x_0$. Given $x_0$, the agent chooses $x_1$ from a feasible set $\Gamma(x_0)$. The $x_0$, together with the choice of $x_1$, determines the current return, denoted by $F(x_0, x_1)$. At time 1, the economy starts with $x_1$ and the agent chooses $x_2$, etc. The future is discounted at rate $\beta \in (0, 1)$.

The problem is finding an infinite feasible sequence $\{x_{t+1}\}_{t=0}^{\infty}$ that maximizes $\sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1})$. In this part of the class we will try to find and characterize solutions to these sequential problems. A sequential problem (SP) will be characterized by three objects:

- A set $X$ such that $x_t \in X$ for all $t$.
- A correspondence of feasible actions that assigns to each $x \in X$ a subset of $X$, $\Gamma : X \to X$.
- A return function that maps any two elements of $X$ into the real line, $F : X \times X \to \mathbb{R}$.
5.1 The One-Sector Growth Model

Output is produced with capital and labor according to $F(k, n)$, which is strictly increasing and strictly concave,
\[
F\big(\theta\tilde k + (1-\theta)\hat k,\ \theta\tilde n + (1-\theta)\hat n\big) > \theta F(\tilde k, \tilde n) + (1-\theta)F(\hat k, \hat n)
\]
for all $(\tilde k, \tilde n) \neq (\hat k, \hat n)$, $\tilde k, \tilde n, \hat k, \hat n > 0$, and $\theta \in (0, 1)$, and satisfies the Inada conditions
\[
\lim_{k \to 0} F_k(k, 1) = \infty, \qquad \lim_{k \to \infty} F_k(k, 1) = 0.
\]
The output together with the undepreciated part of the current capital can be used for consumption, denoted by c_t, or be kept as future capital stock, denoted by k_{t+1}. Therefore,

    c_t + k_{t+1} ≤ F(k_t, n_t) + (1 − δ)k_t,

where δ ∈ (0, 1) is the depreciation rate. The utility function u(c_t) is strictly increasing and strictly concave, with lim_{c→0} u′(c) = ∞. Hence, the problem of a representative agent is

    max  Σ_{t=0}^∞ β^t u(c_t),

subject to

    c_t + k_{t+1} ≤ f(k_t),
    k_0 > 0 given,
    c_t ≥ 0, k_{t+1} ≥ 0, and 0 ≤ n_t ≤ 1,

with β ∈ (0, 1).
Since agents do not derive any utility from leisure, they will supply all of their labor endowment, i.e. n_t = 1 for all t. Therefore, f(k_t) = F(k_t, 1) + (1 − δ)k_t.
Since the utility function is strictly increasing, it must be the case that

    c_t + k_{t+1} = f(k_t).

Then, we can write this problem as:

    max_{ {k_{t+1}}_{t=0}^∞ }  Σ_{t=0}^∞ β^t u(f(k_t) − k_{t+1}),

subject to

    k_0 > 0 given, and f(k_t) − k_{t+1} ≥ 0.

In terms of the general notation above,

    F(x_t, x_{t+1}) = u(f(k_t) − k_{t+1}),

and

    Γ(x_t) = Γ(k_t) = [0, f(k_t)].
Remark 65 What about the set X?

How can we solve this problem? In order to gain some insight, first consider a finite-time-horizon version of this problem, given by:

    max_{ {k_{t+1}}_{t=0}^T }  Σ_{t=0}^T β^t u(f(k_t) − k_{t+1}),

subject to

    k_0 > 0 given, and f(k_t) − k_{t+1} ≥ 0.

The Lagrangian for this problem is

    L = Σ_{t=0}^T β^t [ u(f(k_t) − k_{t+1}) + λ_t k_{t+1} ],

where λ_t ≥ 0.

Remark 66 Note that in this problem it will never be optimal to set k_{t+1} = f(k_t), since lim_{c→0} u′(c) = ∞; hence we can ignore the upper constraint.
The first order conditions for this problem are given by:

    ∂L/∂k_{t+1} = −β^t u′(f(k_t) − k_{t+1}) + β^{t+1} u′(f(k_{t+1}) − k_{t+2}) f′(k_{t+1}) + β^t λ_t = 0,  t = 0, ..., T − 1,

and

    ∂L/∂k_{T+1} = −β^T u′(f(k_T) − k_{T+1}) + β^T λ_T = 0,

together with the complementary slackness conditions

    k_{t+1} ≥ 0,  λ_t ≥ 0,  λ_t k_{t+1} = 0,  t = 0, 1, ..., T.
Consider, as an example, u(c) = ln(c) and f(k) = k^α with α ∈ (0, 1). At an interior solution the first order conditions imply

    1/c_t = αβ k_{t+1}^{α−1} / c_{t+1}.

Hence,

    c_{t+1} = αβ k_{t+1}^{α−1} c_t,

or

    (k_{t+1}^α − k_{t+2}) = αβ k_{t+1}^{α−1} (k_t^α − k_{t+1}).

Note that at time T − 1 this condition reads

    (k_T^α − k_{T+1}) = αβ k_T^{α−1} (k_{T−1}^α − k_T).

Since k_{T+1} = 0,

    k_T^α = αβ k_T^{α−1} (k_{T−1}^α − k_T),

which gives

    k_T = [αβ/(1 + αβ)] k_{T−1}^α.

Given this terminal condition, we can solve backwards for the whole sequence.
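The terminal step can be checked numerically. The sketch below maximizes the last two periods' utility by brute force and compares the argmax with the closed form k_T = αβ k_{T−1}^α/(1 + αβ); the values α = 0.3, β = 0.95, k_{T−1} = 1.5 are illustrative assumptions, not taken from the notes.

```python
import math

# Check the terminal condition k_T = alpha*beta*k_{T-1}^alpha/(1 + alpha*beta)
# by maximizing ln(k_{T-1}^alpha - k_T) + beta*ln(k_T^alpha) over a fine grid.
# alpha, beta and k_prev are illustrative values.
alpha, beta = 0.3, 0.95
k_prev = 1.5                      # k_{T-1}
y = k_prev ** alpha               # resources available going into period T

def objective(kT):
    # utility of period T-1 consumption plus discounted terminal utility
    return math.log(y - kT) + beta * alpha * math.log(kT)

grid = [y * (i + 1) / 100_000 for i in range(99_999)]
k_star = max(grid, key=objective)                 # brute-force argmax
k_closed = alpha * beta * y / (1 + alpha * beta)  # closed-form solution
print(k_star, k_closed)
```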
Consider now the general infinite-horizon problem

    max_{ {k_{t+1}}_{t=0}^∞ }  Σ_{t=0}^∞ β^t F(k_t, k_{t+1}),

subject to

    k_0 > 0 given, and k_{t+1} ≥ 0.

Then {k_{t+1}}_{t=0}^∞ is optimal if it satisfies the Euler equations

    F_2(k_t, k_{t+1}) + βF_1(k_{t+1}, k_{t+2}) = 0,  t = 0, 1, 2, ...,

and the transversality condition

    lim_{T→∞} β^T F_1(k_T, k_{T+1}) k_T = 0.

Remark 69 Note that for the one sector growth model, where F(k_t, k_{t+1}) = u(f(k_t) − k_{t+1}), the Euler equation implies

    u′(f(k_t) − k_{t+1})(−1) + βu′(f(k_{t+1}) − k_{t+2}) f′(k_{t+1}) = 0,

or

    u′(f(k_t) − k_{t+1}) = βu′(f(k_{t+1}) − k_{t+2}) f′(k_{t+1}).
Proof: Consider an alternative feasible sequence {k̃_{t+1}}_{t=0}^∞. We want to show that

    D = lim_{T→∞} Σ_{t=0}^T β^t [ F(k_t, k_{t+1}) − F(k̃_t, k̃_{t+1}) ] ≥ 0.

Since F is concave,

    D ≥ lim_{T→∞} Σ_{t=0}^T β^t [ F_1(k_t, k_{t+1})(k_t − k̃_t) + F_2(k_t, k_{t+1})(k_{t+1} − k̃_{t+1}) ].

Note that k̃_0 = k_0 > 0 is given. Then, rearranging terms,

    D ≥ lim_{T→∞} { Σ_{t=0}^{T−1} β^t [ F_2(k_t, k_{t+1}) + βF_1(k_{t+1}, k_{t+2}) ] (k_{t+1} − k̃_{t+1})
                    + β^T F_2(k_T, k_{T+1})(k_{T+1} − k̃_{T+1}) },

where the term in brackets is zero by the Euler equations. Using the Euler equation once more, F_2(k_T, k_{T+1}) = −βF_1(k_{T+1}, k_{T+2}) ≤ 0, and since k̃_{T+1} ≥ 0 and F_1 ≥ 0,

    β^T F_2(k_T, k_{T+1})(k_{T+1} − k̃_{T+1}) ≥ −β^{T+1} F_1(k_{T+1}, k_{T+2}) k_{T+1}.

Hence,

    D ≥ −lim_{T→∞} β^{T+1} F_1(k_{T+1}, k_{T+2}) k_{T+1} = 0,

by the transversality condition. Finally, note that for the one-sector growth model

    F_1(k_t, k_{t+1}) = u′(f(k_t) − k_{t+1}) f′(k_t) ≥ 0,

so the sign restriction used above is satisfied.
5.2 The Dynamic Programming Approach

Consider again

    max_{ {k_{t+1}}_{t=0}^∞ }  Σ_{t=0}^∞ β^t u(f(k_t) − k_{t+1}),

subject to

    k_0 > 0 given, and f(k_t) − k_{t+1} ≥ 0.

To gain some insight, start from the finite-horizon version with u(c) = ln(c) and f(k) = k^α:

    max_{ {k_{t+1}}_{t=0}^T }  Σ_{t=0}^T β^t ln(k_t^α − k_{t+1}).

Suppose now we are at period T − 1 and have some capital stock k. Then the only problem is to choose tomorrow's capital stock. Let's denote it by k′. In the last period it is optimal to consume everything, so V_T(k′) = ln(k′^α) = α ln(k′). Since I know exactly what I will do with k′ tomorrow, the problem of choosing k′ is simply

    max_{k′} { ln(k^α − k′) + βα ln(k′) }.

The first order condition,

    1/(k^α − k′) = αβ/k′,

gives

    k′ = αβ k^α / (1 + αβ).

We will call this the period T − 1 policy function:

    g_{T−1}(k) = αβ k^α / (1 + αβ).

It tells us what we will do (i.e. how much we will save) if we enter the period with a capital stock k. Once we know this we can also calculate the maximized value function for period T − 1:

    V_{T−1}(k) = ln( k^α − αβk^α/(1 + αβ) ) + βα ln( αβk^α/(1 + αβ) )
               = α(1 + αβ) ln(k) + ln( 1/(1 + αβ) ) + αβ ln( αβ/(1 + αβ) ).

It tells us the highest utility level we can reach if we enter period T − 1 with k. Once we know V_{T−1}, however, g_{T−2} and V_{T−2} can be determined by

    V_{T−2}(k) = max_{k′} { ln(k^α − k′) + βV_{T−1}(k′) }.
Indeed, for any period t = 0, ..., T − 1, the problem boils down to finding value functions and associated policy functions that satisfy

    V_t(k) = max_{k′} { ln(k^α − k′) + βV_{t+1}(k′) }.

The key feature of the dynamic programming approach is to split the problem into several problems that involve only today and tomorrow.

Remark 72 Note that V_T(k) = ln(k^α) = α ln(k) and g_T(k) = 0 are determined trivially.

Remark 73 The dynamic programming approach focuses on finding the policy functions and the value functions rather than finding the sequences.

Remark 74 As the following example shows, a pattern often emerges in V_t and g_t that allows us to use an induction argument.
Example 75 Consider the following problem (note that there is no discounting):

    max_{ {k_{t+1}}_{t=0}^T }  Σ_{t=0}^T ln(c_t),

subject to

    k_0 > 0 given,
    k_{t+1} = (k_t − c_t)R, with R ≥ 1,
    k_{t+1} ≥ 0 and c_t ≥ 0.
How should we write this as a DP problem? Note that it makes sense to write the value and policy functions as functions of k. If the agent knows k at period t, then he/she can determine k′ (the next period's asset). Again let's start from period T − 1. We know that V_T(k) = ln(k), since in the last period the agent consumes everything. Hence,

    V_{T−1}(k) = max_{k′} { ln(k − k′/R) + ln(k′) }.

The first order condition,

    [1/(k − k′/R)](1/R) = 1/k′,

gives k′ = Rk/2. For t = T − 1, then,

    g_{T−1}(k) = Rk/2  and  V_{T−1}(k) = 2 ln(k/2) + ln(R).

For t = T − 2, we have

    V_{T−2}(k) = max_{k′} { ln(k − k′/R) + 2 ln(k′/2) + ln(R) },

which gives

    g_{T−2}(k) = 2Rk/3  and  V_{T−2}(k) = 3 ln(k/3) + 3 ln(R).

The pattern that emerges is

    V_t(k) = (T − t + 1) ln( k/(T − t + 1) ) + Σ_{i=1}^{T−t} i ln(R),

and

    g_t(k) = [(T − t)/(T − t + 1)] Rk.
Suppose now that for τ ∈ {t + 1, ..., T} the value and policy functions are given by these equations. Then,

    V_t(k) = max_{k′} { ln(k − k′/R) + (T − t) ln( k′/(T − t) ) + Σ_{i=1}^{T−t−1} i ln(R) }.

The first order condition,

    [1/(k − k′/R)](1/R) = (T − t)/k′,

gives

    k′ = [(T − t)/(T − t + 1)] Rk,

so g_t(k) has the conjectured form. Substituting k′ back,

    V_t(k) = ln( k − (T − t)k/(T − t + 1) ) + (T − t) ln( (T − t)Rk / [(T − t)(T − t + 1)] ) + Σ_{i=1}^{T−t−1} i ln(R)
           = (T − t + 1) ln( k/(T − t + 1) ) + (T − t) ln(R) + Σ_{i=1}^{T−t−1} i ln(R)
           = (T − t + 1) ln( k/(T − t + 1) ) + Σ_{i=1}^{T−t} i ln(R),

which completes the induction.
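One induction step can also be verified numerically: given the conjectured V_{t+1}, brute-force maximization should reproduce the closed-form policy g_t(k) = (T − t)Rk/(T − t + 1). The values of T, t, R and k below are illustrative assumptions.

```python
import math

# Verify one induction step of Example 75: maximize
#   ln(k - k'/R) + V_{t+1}(k'),  V_{t+1}(k') = m ln(k'/m) + sum_{i=1}^{m-1} i ln R,
# where m = T - t, and compare the argmax with g_t(k) = m R k/(m + 1).
# T, t, R, k are illustrative values.
T, t, R, k = 10, 4, 1.05, 3.0
m = T - t
const = sum(i * math.log(R) for i in range(1, m))

def objective(kp):
    return math.log(k - kp / R) + m * math.log(kp / m) + const

grid = [R * k * (i + 1) / 200_000 for i in range(199_999)]
kp_star = max(grid, key=objective)
g_closed = m * R * k / (m + 1)
print(kp_star, g_closed)
```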
Consider now the infinite-horizon problem

    max_{ {k_{t+1}}_{t=0}^∞ }  Σ_{t=0}^∞ β^t u(f(k_t) − k_{t+1}),

subject to k_0 given and 0 ≤ k_{t+1} ≤ f(k_t).
The maximized value function is the highest level of discounted utility we can get starting with some initial level of capital and following the best possible sequence of actions. It is a function that maps the set of capital stocks into the real line. Note that

    max_{ {k_{t+1}}_{t=0}^∞ }  Σ_{t=0}^∞ β^t u(f(k_t) − k_{t+1})
      = max_{ 0 ≤ k_1 ≤ f(k_0) } { u(f(k_0) − k_1) + β [ max_{ {k_{t+1}}_{t=1}^∞, k_t ≤ k_{t+1} ≤ f(k_t) }  Σ_{t=1}^∞ β^{t−1} u(f(k_t) − k_{t+1}) ] }.

Now if V(k_0) is the maximized value function, then it also gives us the maximum value starting with k_1. Then, we have

    V(k_0) = max_{ 0 ≤ k_1 ≤ f(k_0) } [ u(f(k_0) − k_1) + βV(k_1) ].
But time does not play any particular role here other than indicating today vs. tomorrow. Hence, if V is the maximized value function, it should satisfy

    V(k) = max_{ 0 ≤ k′ ≤ f(k) } [ u(f(k) − k′) + βV(k′) ]        (value function),

with the associated policy function

    g(k) = arg max_{ k′ ∈ [0, f(k)] } [ u(f(k) − k′) + βV(k′) ]    (policy function).

Once we know g(k), we can start from k_0 and characterize the optimal path. Hence, the dynamic programming approach focuses on finding functions such as V and g, rather than finding sequences. Of course the question is: how do we find V?
Sometimes a guess and verify approach works, as the following example demonstrates:
Example 76 Let

    u(c) = ln(c),  and  f(k) = k^α.

Then, we can write the following dynamic programming problem

    V(k) = max_{ 0 ≤ k′ ≤ k^α } [ ln(k^α − k′) + βV(k′) ].    (49)

Guess V(k′) = A + B ln(k′), so the problem becomes

    max_{ 0 ≤ k′ ≤ k^α } [ ln(k^α − k′) + β(A + B ln(k′)) ].

Note that once we have a functional form for V, this maximization problem is trivial. Now we can find the FOC for k′:

    1/(k^α − k′) = βB/k′.

Hence,

    k′ = βB k^α − βB k′,

or

    k′ = g(k) = βB k^α / (1 + βB).    (50)

If V(k) is the true value function that solves equation (49), then if we follow g(k) defined in equation (50) we should get back V(k), i.e.

    A + B ln(k) = ln( k^α − βBk^α/(1 + βB) ) + β[ A + B ln( βBk^α/(1 + βB) ) ].

Matching coefficients gives

    B = α/(1 − αβ),  A = [1/(1 − β)] [ ln(1 − αβ) + (αβ/(1 − αβ)) ln(αβ) ],

so that

    g(k) = βBk^α/(1 + βB) = αβ k^α.
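A quick numerical sanity check of the guess-and-verify solution: applying the right-hand side of the Bellman equation to V(k) = A + B ln k should return V itself. The parameter values α = 0.3, β = 0.95 are illustrative assumptions.

```python
import math

# With B = alpha/(1 - alpha*beta) and
# A = [ln(1 - alpha*beta) + (alpha*beta/(1 - alpha*beta)) ln(alpha*beta)]/(1 - beta),
# the Bellman operator applied to V(k) = A + B ln k should give back V(k).
alpha, beta = 0.3, 0.95
ab = alpha * beta
B = alpha / (1 - ab)
A = (math.log(1 - ab) + (ab / (1 - ab)) * math.log(ab)) / (1 - beta)

def V(k):
    return A + B * math.log(k)

def TV(k):
    # right-hand side of the Bellman equation, by grid search over k'
    y = k ** alpha
    return max(math.log(y - kp) + beta * V(kp)
               for kp in (y * (i + 1) / 100_000 for i in range(99_999)))

for k in (0.5, 1.0, 2.0):
    assert abs(TV(k) - V(k)) < 1e-3
print("T V = V: fixed point verified")
```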
Remark 77 Note that this policy rule satisfies the transversality condition, since for this problem

    lim_{t→∞} β^t F_1(k_t, k_{t+1}) k_t = lim_{t→∞} β^t [ α k_t^{α−1} / (k_t^α − k_{t+1}) ] k_t
      = lim_{t→∞} β^t [ α k_t^α / (k_t^α − αβ k_t^α) ]
      = lim_{t→∞} β^t [ α / (1 − αβ) ] = 0.
Remark 78 Check Exercise 2.3 in Stokey and Lucas with Prescott (1989) to see how you can arrive at the guess for V using the policy rule for a finite horizon problem.

It is obvious that the cases where we can have a nice guess that will turn out to be the true V are rather limited. What can we do if we have no idea what V is? We can still hope that starting from some initial guess V⁰(k) might help. For this problem, for example, let

    V⁰(k) = 0.

Then, we can find a new function for V, call it V¹(k), by solving

    V¹(k) = max_{ 0 ≤ k′ ≤ k^α } [ ln(k^α − k′) + βV⁰(k′) ].

Since V⁰(k) = 0, solving the maximization problem on the right hand side is trivial: the solution is k′ = 0. Then we have

    V¹(k) = ln(k^α) = α ln(k).

Given V¹, we can now solve

    V²(k) = max_{ 0 ≤ k′ ≤ k^α } [ ln(k^α − k′) + βV¹(k′) ].
Again, once we have a guess for V on the right hand side, we can write the FOC for k′ as

    1/(k^α − k′) = αβ/k′,

to arrive at

    k′ = αβ k^α / (1 + αβ),

and

    V²(k) = ln( k^α − αβk^α/(1 + αβ) ) + αβ ln( αβ k^α/(1 + αβ) ).
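Iterating further by hand quickly becomes tedious, but the iteration Vⁿ⁺¹ = TVⁿ is easy to carry out on a computer. The sketch below runs value function iteration on a capital grid, starting from V⁰ = 0, and compares the resulting policy with the closed form g(k) = αβk^α from Example 76; the grid bounds and parameters are illustrative choices.

```python
import math

# Value function iteration for V(k) = max ln(k^alpha - k') + beta V(k'),
# starting from V^0 = 0, on a grid; the policy should approach
# g(k) = alpha*beta*k^alpha. Parameters and grid are illustrative.
alpha, beta = 0.3, 0.95
step = 0.005
grid = [0.05 + step * i for i in range(200)]   # capital grid on [0.05, 1.045]
V = [0.0] * len(grid)                          # initial guess V^0 = 0
policy = [0.0] * len(grid)

def interp(x):
    """Piecewise-linear interpolation of the current V on the grid."""
    if x <= grid[0]:
        return V[0]
    if x >= grid[-1]:
        return V[-1]
    j = int((x - grid[0]) / step)
    w = (x - grid[j]) / step
    return (1 - w) * V[j] + w * V[j + 1]

for _ in range(100):                           # apply the operator T repeatedly
    newV = []
    for i, k in enumerate(grid):
        y = k ** alpha
        best, best_kp = -1e18, grid[0]
        for kp in grid:                        # search over feasible k' < k^alpha
            if kp >= y:
                break
            val = math.log(y - kp) + beta * interp(kp)
            if val > best:
                best, best_kp = val, kp
        newV.append(best)
        policy[i] = best_kp
    V = newV

k = grid[100]
print(policy[100], alpha * beta * k ** alpha)  # grid policy vs. closed form
```

The two numbers agree up to the grid spacing, reflecting the contraction property of T.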
Continuing in this fashion, define the operator T by

    (TV)(k) = max_{ 0 ≤ k′ ≤ k^α } [ ln(k^α − k′) + βV(k′) ],

so that Vⁿ⁺¹ = TVⁿ. Our hope is that this sequence of functions converges to the true value function.
More generally, consider the sequential problem

    max_{ {x_{t+1}}_{t=0}^∞ }  Σ_{t=0}^∞ β^t F(x_t, x_{t+1}),    (SP)

subject to

    x_{t+1} ∈ Γ(x_t), t = 0, 1, 2, ...,  and  x_0 given.

Now we are interested in solving the following functional equation (FE) to find V:

    V(x) = sup_{ y ∈ Γ(x) } [ F(x, y) + βV(y) ].    (FE)
Mathematical Preliminaries

This section is based on Stokey and Lucas with Prescott (1989), Sundaram (1996), and Rudin (1976). We will proceed in the following steps:

We will first define a set of objects that we can compare with each other in a meaningful way.

This will allow us to define convergence.

Then we will focus on sequences of functions and define different convergence notions for sequences of functions.

We will then look at operators, such as the operator T we defined above, that generate sequences of functions that converge, and such that T V preserves nice properties of the function V.
6.1 Vector Spaces and Metric Spaces

Definition 79 A real vector space (or linear space) is a set of elements (vectors) X together with two operations, addition and scalar multiplication (which are defined as x + y ∈ X and ax ∈ X for any two vectors x, y ∈ X and for any real number a), such that for all x, y, z ∈ X and a, b ∈ R:

    x + y = y + x;
    (x + y) + z = x + (y + z);
    a(x + y) = ax + ay;
    (a + b)x = ax + bx;
    (ab)x = a(bx);
    there exists θ ∈ X such that x + θ = x and 0x = θ;
    1x = x.
Definition 80 A metric space is a set S, together with a metric (distance function) ρ : S × S → R, such that for all x, y, z ∈ S: (i) ρ(x, y) ≥ 0, with equality if and only if x = y; (ii) ρ(x, y) = ρ(y, x); (iii) ρ(x, z) ≤ ρ(x, y) + ρ(y, z).

Example 81 (S, ρ), where S = R², the plane, and ρ(x, y) = [(x₁ − y₁)² + (x₂ − y₂)²]^{1/2} for all x and y in S, is a metric space.

Example 82 (S, ρ), where S = R, the set of real numbers, and ρ(x, y) = |x − y| for all x and y in S, is a metric space.
For vector spaces, metrics are defined in a way that measures the distance between two points as the distance of their difference from the zero point.

Definition 83 A normed vector space is a vector space S, together with a norm ‖·‖ : S → R, such that for all x, y ∈ S and a ∈ R: (i) ‖x‖ ≥ 0, and ‖x‖ = 0 iff x = θ; (ii) ‖ax‖ = |a| ‖x‖; (iii) ‖x + y‖ ≤ ‖x‖ + ‖y‖.
Example: Let S = C[a, b], the set of continuous functions on [a, b], with the sup norm ‖x‖ = max_{a≤t≤b} |x(t)| and the induced metric

    ρ(x, y) = ‖x − y‖ = max_{a ≤ t ≤ b} |x(t) − y(t)|.

Then (S, ρ) is a metric space: (i) ρ(x, y) = max_{a≤t≤b} |x(t) − y(t)| = |x(t*) − y(t*)| ≥ 0 for some maximizing t*, with equality if and only if x = y; (ii) ρ(x, y) = ρ(y, x) is immediate; and (iii) letting t* attain the maximum for x − z,

    ρ(x, z) = max_{a ≤ t ≤ b} |x(t) − z(t)| = |x(t*) − z(t*)|
            ≤ |x(t*) − y(t*)| + |y(t*) − z(t*)|
            ≤ max_{a ≤ t ≤ b} |x(t) − y(t)| + max_{a ≤ t ≤ b} |y(t) − z(t)| = ρ(x, y) + ρ(y, z).
6.1.1 Supremum and Infimum

Definition 87 Let A ≠ ∅ be a set of real numbers. The set of upper bounds of A is

    U(A) = {u ∈ R | u ≥ a, for all a ∈ A},

and the set of lower bounds of A is

    L(A) = {l ∈ R | l ≤ a, for all a ∈ A}.
6.2 Sequences

Definition A sequence {x_n}_{n=0}^∞ in S converges to x ∈ S if, for each ε > 0, there exists N_ε such that ρ(x_n, x) < ε for all n ≥ N_ε.

Example: Consider the sequence of functions

    x₁(t) = t/(1 + t), x₂(t) = 2t/(2 + t), ..., x_n(t) = nt/(n + t), ....

For each fixed t ≥ 0, the sequence of real numbers {x_n(t)} is bounded, |x_n(t)| ≤ M for M = t, and converges to t as n → ∞.

Example: Consider the sequence

    x_k = { 1 if k is odd; k/2 if k is even } = {1, 1, 1, 2, 1, 3, 1, 4, ...},

where lim sup x_k = ∞ and lim inf x_k = 1. Note that lim sup and lim inf are themselves limit points of a sequence. Therefore, in order to show that a sequence converges we can use the following theorem: a sequence of real numbers converges if and only if its lim sup and lim inf are finite and equal.
93
1.8
1.6
1.4
n= 10
1.2
0.8
0.6
n= 1
0.4
1
1.1
1.2
1.3
1.4
1.5
94
1.6
nt
n+t
1.7
1.8
1.9
Definition 98 A sequence {x_n}_{n=0}^∞ in S is a Cauchy sequence if, for each ε > 0, there exists N_ε such that

    ρ(x_n, x_m) < ε for all n, m ≥ N_ε.
Let X be the set of continuous functions on [a, b], with

    ρ(x, y) = ∫_a^b |x(t) − y(t)| dt.

Let's show that (X, ρ) is a metric space that is not complete. Since |x(t) − y(t)| ≥ 0,

    (i) ρ(x, y) = ∫_a^b |x(t) − y(t)| dt ≥ 0,

and

    (ii) if x = y, then |x(t) − y(t)| = 0 for all t, so ρ(x, y) = 0.

On the other hand, if ρ(x, y) = 0, then x = y. In order to show this, let ρ(x, y) = 0 but x ≠ y. Then there must exist a t₀ such that x(t₀) − y(t₀) = ε > 0 (say). Since x is a continuous function, there exists δ_x such that |t₀ − t| < δ_x implies |x(t₀) − x(t)| < ε/4. We can define δ_y similarly. Let δ = min{δ_x, δ_y}; then for all t such that |t₀ − t| < δ, x(t) − y(t) > ε/2. This follows from

    ε = |x(t₀) − y(t₀)| ≤ |x(t₀) − x(t)| + |x(t) − y(t)| + |y(t) − y(t₀)|.

Let S = [t₀ − δ, t₀ + δ] ∩ [a, b]. Since t₀ ∈ S,

    ρ(x, y) = ∫_a^b |x(t) − y(t)| dt ≥ ∫_S |x(t) − y(t)| dt > δε/2 > 0,

a contradiction. Finally, the triangle inequality,

    ρ(x, z) = ∫_a^b |x(t) − z(t)| dt ≤ ∫_a^b |x(t) − y(t)| dt + ∫_a^b |y(t) − z(t)| dt = ρ(x, y) + ρ(y, z),

follows from the triangle inequality for the absolute value. To show that this space is not complete, consider f_n(t) = tⁿ for t ∈ [0, 1]. This sequence converges to

    f(t) = { 0 if t ∈ [0, 1); 1 if t = 1 },

which is not an element of C(0, 1). It is easy to show, however, that this sequence is Cauchy, since for m > n,

    ρ(f_n, f_m) = ∫_0^1 (tⁿ − tᵐ) dt = 1/(n + 1) − 1/(m + 1) ≤ 1/(n + 1),

which can be made arbitrarily small.
Example: Let S be the set of continuous, strictly increasing functions on [a, b], with

    ρ(x, y) = max_{a ≤ t ≤ b} |x(t) − y(t)|.

The argument used above for the sup metric shows that (S, ρ) is a metric space.
Example 103 (Stokey and Lucas, 1989, Exercise 3.6c, page 47) The metric space (S, ρ) in the previous example is not complete. Consider the following example of a Cauchy sequence in S that converges to a point that is not in S:

    x_n(t) = t/n,  n = 1, 2, ....

Each element of this sequence is continuous and strictly increasing on [a, b]. Hence, {x_n} is contained in S. This is also a Cauchy sequence, since

    ρ(x_n, x_m) = max_{a ≤ t ≤ b} | t/n − t/m | = |1/n − 1/m| max_{a ≤ t ≤ b} |t| ≤ (1/n + 1/m) max_{a ≤ t ≤ b} |t|,

which can be made arbitrarily small by picking n and m large enough. The limit of this sequence of functions, however, is not in S, since

    x_n(t) = t/n → 0,

and the zero function is not strictly increasing. Hence, not all Cauchy sequences in S converge to a limit in S; therefore the metric space (S, ρ) is not complete.

Theorem 104 R is complete.

Theorem 105 Rⁿ is complete.
6.3

6.4 Functions

Definition: Let (S, ρ) and (S′, σ) be metric spaces. A function f : S → S′ is continuous at p ∈ S if, for every ε > 0, there exists δ > 0 such that σ(f(x), f(p)) < ε whenever ρ(x, p) < δ.
6.5 Sequences of Functions

6.5.1 Pointwise Convergence

Definition 112 Let (S, ρ) be a metric space. A sequence {x_n}_{n=0}^∞ in S converges to x ∈ S if, for each ε > 0, there exists N_ε such that

    ρ(x_n, x) < ε for all n ≥ N_ε.

One problem with this definition is that we need a candidate limit point x to check if the sequence converges. When we have a sequence of functions, such a candidate naturally arises.

Definition 113 Let {f_n} be a sequence of functions defined on a set E ⊆ S, and suppose that the sequence of numbers {f_n(x)} converges for every x ∈ E. We can then define a function by

    f(x) = lim_{n→∞} f_n(x),  x ∈ E.    (51)
Example: Let

    f_n(x) = nx/(n + x),  x ∈ [1, 2].

At x = 3/2, for example, f_n(3/2) = 3n/(2n + 3) → 3/2. Indeed,

    f(x) = lim_{n→∞} f_n(x) = x,  x ∈ [1, 2].

Given that {f_n} converges pointwise to f(x) = x, we can use this limiting function as our candidate and check if the sequence converges to this function given a particular metric. Let

    ρ(f, g) = max_{x∈[1,2]} |f(x) − g(x)|.

Then,

    ρ(f_n, f) = max_{x∈[1,2]} | nx/(n + x) − x | = max_{x∈[1,2]} x²/(n + x) < max_{x∈[1,2]} x²/n = 4/n,

which can be made arbitrarily small, so f_n → f in this metric as well.
Example: Let f_n(x) = xⁿ, x ∈ [0, 1], with ρ(f, g) = max_x |f(x) − g(x)|. Then,

    f₁(x), f₂(x), f₃(x), ... → 0, for x < 1,

but

    f₁(x), f₂(x), f₃(x), ... → 1, for x = 1.

Hence,

    f(x) = lim_{n→∞} f_n(x) = { 0 if x < 1; 1 if x = 1 }.

Example: Let

    f_n(x) = nx/(1 + n²x²),  x ∈ [0, 1].
[Figure: the functions f_n(x) for n = 1 and n = 10 on [0, 1].]
In this case,

    f₁(x), f₂(x), f₃(x), ... → 0, for all x ∈ [0, 1],

so f(x) = 0. Yet

    ρ(f_n, f) = max_{x∈[0,1]} | nx/(1 + n²x²) − 0 | = 1/2,

since the maximum is attained at x = 1/n. Hence f_n does not converge to f in the sup metric.
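The difference between these examples can be seen numerically by approximating M_n = sup_x |f_n(x) − f(x)| on a fine grid; the grid-based maximum is only an approximation of the true supremum.

```python
# Approximate sup_x |f_n(x) - f(x)| on a fine grid for two of the examples:
# f_n(x) = nx/(n + x) on [1, 2] (uniform convergence, bound 4/n) and
# f_n(x) = nx/(1 + n^2 x^2) on [0, 1] (sup distance stuck at 1/2).
def sup_dist(fn, f, a, b, m=100_000):
    pts = (a + (b - a) * i / m for i in range(m + 1))
    return max(abs(fn(x) - f(x)) for x in pts)

n = 50
d1 = sup_dist(lambda x: n * x / (n + x), lambda x: x, 1.0, 2.0)
d2 = sup_dist(lambda x: n * x / (1 + n**2 * x**2), lambda x: 0.0, 0.0, 1.0)
print(d1, d2)   # d1 is below 4/n = 0.08; d2 stays near 1/2
```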
6.5.2 Uniform Convergence

Definition 118 We say that a sequence of functions {f_n}, n = 1, 2, 3, ..., converges uniformly on E to a function f if, for every ε > 0, there exists an integer N such that n ≥ N implies

    |f_n(x) − f(x)| ≤ ε for all x ∈ E.

Theorem 119 (Cauchy criterion) {f_n} converges uniformly on E if and only if, for every ε > 0, there exists an integer N such that m, n ≥ N and x ∈ E imply

    |f_n(x) − f_m(x)| ≤ ε.
Proof. Suppose {f_n} converges uniformly on E, and let f be the limit function. Then there must exist an integer N such that n ≥ N and x ∈ E imply

    |f_n(x) − f(x)| ≤ ε/2,

so that if m, n ≥ N and x ∈ E, then

    |f_n(x) − f_m(x)| ≤ |f_n(x) − f(x)| + |f(x) − f_m(x)| ≤ ε.

Conversely, suppose the Cauchy condition holds. Then for every x, {f_n(x)} converges to a limit point that we may call f(x). Thus, the sequence {f_n} converges pointwise to f on E. We need to show that convergence is also uniform. Let ε > 0 be given, and choose N such that the Cauchy condition holds. Fix n, and let m → ∞ in

    |f_n(x) − f_m(x)| ≤ ε.

Since f_m(x) → f(x) as m → ∞, this gives

    |f_n(x) − f(x)| ≤ ε

for every n ≥ N and every x ∈ E, which proves that f_n → f uniformly.
Theorem 120 If {f_n} is a sequence of continuous functions on E, and f_n → f uniformly on E, then f is continuous.

Proof. Fix x ∈ E and ε > 0. Since f_n → f uniformly, there exists N such that

    |f_N(z) − f(z)| < ε/3 for all z ∈ E.

Since f_N is continuous, there exists δ > 0 such that

    ‖x − y‖ < δ  =>  |f_N(x) − f_N(y)| < ε/3.

Hence, for all y with ‖x − y‖ < δ, we have

    |f(x) − f(y)| ≤ |f(x) − f_N(x)| + |f_N(x) − f_N(y)| + |f_N(y) − f(y)| < ε.
Note that the previous two theorems are building blocks in showing that C(X), the set of bounded and continuous functions on a set X ⊆ R^l, with ρ(f, g) = sup_{x∈X} |f(x) − g(x)|, is complete [Theorem 3.1 on page 47 in Stokey and Lucas with Prescott (1989)].
The following theorem provides another characterization of uniform convergence.

Theorem 121 Suppose

    lim_{n→∞} f_n(x) = f(x) for x ∈ E,

and let

    M_n = sup_{x∈E} |f_n(x) − f(x)|.

Then f_n → f uniformly on E if and only if M_n → 0 as n → ∞.
Example: Let

    f_n(x) = { nx for x ≤ 1/n; 1 for x > 1/n },  x ∈ [0, 1].

Then

    f(x) = lim_{n→∞} f_n(x) = { 0 for x = 0; 1 for x > 0 },

and convergence is not uniform, since M_n = sup_x |f_n(x) − f(x)| = 1 for every n.

Example: Let

    f_n(x) = 1/(1 + nx),  x ∈ (0, 1).

Note that f_n(x) → 0 monotonically on (0, 1), yet convergence is not uniform since

    ρ(f_n, f) = sup_{x∈(0,1)} | 1/(1 + nx) − 0 | = 1.
Theorem 125 Let X ⊆ Rⁿ, and let C(X) be the set of bounded continuous functions f : X → R with the sup metric. Then C(X) is a complete metric space.

Proof. See Stokey and Lucas with Prescott (1989), Theorem 3.1.

Hence, if we have a sequence of functions in C(X) and the sequence satisfies the Cauchy criterion, then the limiting function is also in C(X).

6.6 Contraction Mappings
    ρ(Tⁿ⁺¹V₀, TⁿV₀) ≤ βⁿ ρ(V₁, V₀),  n = 0, 1, 2, ....

Hence {TⁿV₀} is a Cauchy sequence; since (S, ρ) is complete, it converges to some V ∈ S. To see that TV = V, note that for any n,

    ρ(TV, V) ≤ ρ(TV, TⁿV₀) + ρ(TⁿV₀, V) ≤ βρ(V, Tⁿ⁻¹V₀) + ρ(TⁿV₀, V).

As n → ∞, the right hand side goes to zero; hence TV = V.
To show that V is unique, suppose there exists V̂ ∈ S such that TV̂ = V̂ and V ≠ V̂. Then

    0 < ρ(V̂, V) = ρ(TV̂, TV) ≤ βρ(V̂, V),

a contradiction. Finally, for any n ≥ 1,

    ρ(TⁿV₀, V) = ρ(T(Tⁿ⁻¹V₀), TV) ≤ βρ(Tⁿ⁻¹V₀, V),

so that ρ(TⁿV₀, V) ≤ βⁿ ρ(V₀, V).
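The geometric error bound ρ(TⁿV₀, V) ≤ βⁿρ(V₀, V) is easy to see in a toy example; the scalar map below is an illustrative stand-in for the Bellman operator, not an example from the notes.

```python
# A scalar contraction T(v) = beta*v + b with modulus beta = 0.5 and fixed
# point v* = b/(1 - beta) = 2: successive approximations converge, and the
# error shrinks by at least the factor beta in every step.
beta, b = 0.5, 1.0
T = lambda v: beta * v + b
v_star = b / (1 - beta)

v, errors = 10.0, []
for _ in range(20):
    v = T(v)
    errors.append(abs(v - v_star))

print(v)  # close to 2.0
print(all(errors[i + 1] <= beta * errors[i] + 1e-12 for i in range(len(errors) - 1)))
```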
Proof. See Stokey and Lucas with Prescott (1989), Theorem 3.3.

Consider the operator for the one sector growth model,

    (TV)(k) = max_{0 ≤ k′ ≤ f(k)} [ u(f(k) − k′) + βV(k′) ].

Let W(k) ≥ V(k) for all k; then

    (TW)(k) = max_{0 ≤ k′ ≤ f(k)} [ u(f(k) − k′) + βW(k′) ] ≥ max_{0 ≤ k′ ≤ f(k)} [ u(f(k) − k′) + βV(k′) ] = (TV)(k),

and, for any constant a,

    T(V + a)(k) = max_{0 ≤ k′ ≤ f(k)} [ u(f(k) − k′) + βV(k′) + βa ]
                = max_{0 ≤ k′ ≤ f(k)} [ u(f(k) − k′) + βV(k′) ] + βa
                = (TV)(k) + βa.
Exercise 130 Consider the mapping A of n-dimensional space into itself given by the system of linear equations

    y = A(x),  where  y_i = Σ_{j=1}^n a_{ij} x_j + b_i,  i = 1, ..., n.

If A is a contraction mapping, we can use the method of successive approximations to solve the equation Ax = x. Given the metric

    ρ(x, y) = max_{1 ≤ i ≤ n} |x_i − y_i|,

we have

    |A(x)_i − A(y)_i| = | Σ_{j=1}^n a_{ij}(x_j − y_j) | ≤ Σ_{j=1}^n |a_{ij}| |x_j − y_j| ≤ max_{1 ≤ j ≤ n} |x_j − y_j| Σ_{j=1}^n |a_{ij}|.

Then,

    ρ(Ax, Ay) = max_{1 ≤ i ≤ n} |A(x)_i − A(y)_i| ≤ ( max_{1 ≤ i ≤ n} Σ_{j=1}^n |a_{ij}| ) ρ(x, y).

If

    max_{1 ≤ i ≤ n} Σ_{j=1}^n |a_{ij}| ≤ k < 1,

then A is a contraction.
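The method of successive approximations from the exercise can be sketched as follows; the matrix A and vector b are illustrative, chosen so that the maximum row sum of |a_ij| is 0.5 < 1.

```python
# Solve x = Ax + b by successive approximations, valid because
# max_i sum_j |a_ij| = 0.5 < 1 makes the map a contraction in the max metric.
A = [[0.2, 0.3],
     [0.1, 0.4]]
b = [1.0, 2.0]

def apply_map(x):
    return [sum(A[i][j] * x[j] for j in range(2)) + b[i] for i in range(2)]

x = [0.0, 0.0]
for _ in range(100):
    x = apply_map(x)

residual = max(abs(x[i] - apply_map(x)[i]) for i in range(2))
print(x, residual)  # residual is essentially zero at the fixed point
```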
Exercise Let X = [1, ∞) with ρ(x, x′) = |x − x′|, and consider

    f(x) = (1/2)(x + a/x).

Show that if a ∈ (1, 3), then f is a contraction. Find the fixed point of f as a function of a.
One needs to check two things: a) f maps X into X, and b) for all x and x′ from X, ρ(f(x), f(x′)) ≤ kρ(x, x′) for some k ∈ (0, 1). To check that f maps X into X, observe that f″(x) = a/x³ > 0 for all x ∈ X and a ∈ (1, 3). Thus, f is strictly convex, which means that it has a unique minimum. To find that minimum, solve f′(x) = (1/2)(1 − a/x²) = 0. The solution is x = √a > 1. Then f(√a) = √a > 1, which implies that for all a ∈ (1, 3), f : X → X. To verify that f(x) is actually a contraction, consider

    ρ(f(x), f(x′)) = | (1/2)(x + a/x) − (1/2)(x′ + a/x′) | = (1/2) | 1 − a/(xx′) | |x − x′|.

It suffices to show that a ∈ (1, 3) implies (1/2)|1 − a/(xx′)| ≤ k < 1 for all x, x′ ∈ X. For any fixed a, the function g_a(z) = (1/2)|1 − a/z|, z ≥ 1, is decreasing on (1, a), equal to zero at z = a, increasing thereafter, and

    lim_{z→+∞} g_a(z) = 1/2.

Therefore, it is sufficient to consider x and x′ such that xx′ ∈ (1, a). Then

    (1/2) | 1 − a/(xx′) | < (1/2)(a − 1),

and (1/2)(a − 1) < 1 if a ∈ (1, 3). Finally, the fixed point solves x = (1/2)(x + a/x), i.e. x = √a.
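Iterating this particular contraction is, in fact, the Babylonian (Newton) method for square roots; a = 2 below is an illustrative choice in (1, 3).

```python
# Successive approximations for f(x) = (x + a/x)/2 converge to the fixed
# point sqrt(a), as the contraction mapping theorem guarantees.
a = 2.0
f = lambda x: 0.5 * (x + a / x)

x = 1.0
for _ in range(30):
    x = f(x)

print(x)  # close to sqrt(2) = 1.41421356...
```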
6.7 Correspondences

Remember the functional equation

    V(x) = sup_{y ∈ Γ(x)} [ F(x, y) + βV(y) ],    (FE)

where x ∈ X ⊆ Rⁿ is the beginning-of-period state variable, y ∈ X is the end-of-period state variable (or control variable) to be chosen, and F(x, y) is the current period return function.
Correspondences are used to denote the relationship between the current state variable, x, and the choice variable, y. A feasibility correspondence, Γ : X → X, is used to define which values of y are feasible given x. We would like to know how Γ(x) behaves as x changes over X in order to be able to characterize how the maximizing values of y and the value function V(x) behave over X. Hence, we need to introduce a notion of continuity for correspondences.
Definition 133 Γ : X → Y is a compact-valued correspondence if Γ(x) is a compact subset of Y for each x ∈ X.

Definition 134 Γ : X → Y is a closed-valued correspondence if Γ(x) is a closed subset of Y for each x ∈ X.

Definition 135 Γ : X → Y is a convex-valued correspondence if Γ(x) is a convex subset of Y for each x ∈ X.

Example: Γ(x) = {y : 0 ≤ y ≤ x}, where X ⊆ R₊ and Y ⊆ R₊, is compact-valued, closed-valued, and convex-valued.

Definition 137 Γ : X → Y has a closed graph if A = {(x, y) ∈ X × Y : y ∈ Γ(x)} is a closed set.

Definition 138 Γ : X → Y has a convex graph if A = {(x, y) ∈ X × Y : y ∈ Γ(x)} is a convex set.

Note that a closed-graph correspondence is also closed-valued, and a convex-graph correspondence is also convex-valued. The converses, however, do not hold.
6.7.1 Lower Hemi-Continuity:

Definition 139 A correspondence Γ : X → Y is lower hemi-continuous (l.h.c.) at x if Γ(x) is non-empty and if, for every y ∈ Γ(x) and every sequence x_n → x, there exist N ≥ 1 and a sequence {y_n} such that y_n ∈ Γ(x_n) for all n ≥ N and y_n → y.

[Figure: a correspondence that fails to be l.h.c. at x, with a sequence {x_n} approaching x.]
6.7.2 Upper Hemi-Continuity:

Definition 140 A compact-valued correspondence Γ : X → Y is upper hemi-continuous (u.h.c.) at x if Γ(x) is non-empty and if, for every sequence x_n → x and every sequence {y_n} such that y_n ∈ Γ(x_n) for all n, there exists a convergent subsequence of {y_n} whose limit point y is in Γ(x).

Note that in order to check u.h.c. of a correspondence at x, we first pick x_n → x and a sequence y_n contained in the images of x_n. Then, we look for a convergent subsequence of y_n which converges to a point y in the image of x. Upper hemi-continuity will fail if there is a sudden "collapse" in the correspondence. Then, we could pick x_n and y_n, but fail to find a point y in the image of x such that a subsequence of y_n converges to that point. The correspondence in Figure 17 is not u.h.c. at x.

[Figure 17: a correspondence that fails to be u.h.c. at x, with sequences {x_n} and {y_n}.]
6.8 Continuity and the Theorem of the Maximum

Definition A correspondence Γ : X → Y is continuous at x ∈ X if it is both l.h.c. and u.h.c. at x.

T V is the maximized value function, and therefore if F(x, y) + βV(y) is continuous and Γ(x) is compact-valued and continuous, then T V is also a continuous function. Furthermore, if F(x, y) + βV(y) is continuous and strictly concave, and Γ(x) is compact-valued, continuous, and convex-valued, then there is a unique maximizer y, and hence G(x) = g(x) is a function.
References

[1] Rudin, W. Principles of Mathematical Analysis, 3rd ed., McGraw-Hill, 1976.
[2] Sundaram, R. K. A First Course in Optimization Theory, Cambridge University Press, 1996.
[3] Stokey, N. L. and Lucas, R. E. with Prescott, E. Recursive Methods in Economic Dynamics, Harvard University Press, 1989.
Principle of Optimality

Remember the sequential problem

    max_{ {x_{t+1}}_{t=0}^∞ }  Σ_{t=0}^∞ β^t F(x_t, x_{t+1}),    (SP)

subject to

    x_{t+1} ∈ Γ(x_t), t = 0, 1, 2, ...,  and  x_0 given.
We then argued that the supremum function defined by

    V*(x_0) = sup_{ {x_{t+1}}_{t=0}^∞ feasible }  Σ_{t=0}^∞ β^t F(x_t, x_{t+1})

should satisfy the functional equation

    V(x) = sup_{ y ∈ Γ(x) } [ F(x, y) + βV(y) ].    (FE)

The supremum function V*(x_0) tells us the infinite discounted value of following the best sequence {x_{t+1}}_{t=0}^∞. Our hope was that rather than finding the best sequence {x_{t+1}}_{t=0}^∞, we can try to find the function V*(x_0) as a solution to the FE. If our conjecture is correct, then the function V that solves the FE would give us the supremum function, i.e. V(x_0) = V*(x_0).
We will now try to show that:

The supremum function V* satisfies the FE.

A sequence {x_{t+1}}_{t=0}^∞ attains the supremum in (SP) if and only if it satisfies

    V*(x_t) = F(x_t, x_{t+1}) + βV*(x_{t+1}), t = 0, 1, 2, 3, ....

There is indeed a solution to the FE.

In order to prove these claims, let's first introduce some notation. Let X be the set of possible values of the state. We will let Γ : X → X denote the feasibility correspondence. The graph of the feasibility correspondence is then given by A = {(x, y) ∈ X × X : y ∈ Γ(x)}. We will let F : A → R be the return function. Note that F indicates the momentary return of being in a particular state x ∈ X and making a particular choice y out of the feasible values Γ(x). Finally, we will let β ≥ 0 be the discount factor.
These are the givens in this problem. In order to gain some better insight, we will use the one sector growth model as an example.

Example 144 (Stokey, Lucas, and Prescott (1989), Exercise 5.1, p. 103) Consider the dynamic programming problem for the one sector growth model,

    V(x) = max_{ 0 ≤ y ≤ f(x) } { U[f(x) − y] + βV(y) },  0 < β < 1,

where U and f satisfy:

    U1: U : R₊ → R,
    U2: U is continuous,
    U3: U is strictly increasing,
    U4: U is strictly concave,
    U5: U is continuously differentiable,

and

    T1: f is continuous,
    T2: f(0) = 0, and there exists x̄ > 0 such that f(x) ≤ x̄ for all 0 ≤ x ≤ x̄, and f(x) < x for all x > x̄,
    T3: f is strictly increasing,
    T4: f is weakly concave,
    T5: f is continuously differentiable.

Note that in this example we can pick X = [0, x̄], where x̄ is the highest maintainable capital stock (see Figure 18), F(x, y) = U[f(x) − y], and Γ(x) = [0, f(x)].
We will sometimes use the following notation to analyze the one-sector growth model:

    V(k) = max_{ 0 ≤ k′ ≤ f(k) } { U[f(k) − k′] + βV(k′) },

or

    V(k_t) = max_{ 0 ≤ k_{t+1} ≤ f(k_t) } { U[f(k_t) − k_{t+1}] + βV(k_{t+1}) }.
[Figure 18: the curves y = x and y = F(x, 1); x̄ (the highest maintainable capital stock) is where they cross.]
Note that finding a solution to the SP is simply finding the best feasible plan (or plans, if there is more than one). Let Π(x_0) denote the set of feasible plans starting from x_0, i.e. Π(x_0) = { x̃ = {x_t}_{t=0}^∞ : x_{t+1} ∈ Γ(x_t), t = 0, 1, 2, ... }. In order to be able to say anything about the solutions to the SP, we will need two assumptions:

A1: Γ(x) is non-empty for all x ∈ X.

A2: For all x_0 ∈ X and all x̃ ∈ Π(x_0), lim_{n→∞} Σ_{t=0}^n β^t F(x_t, x_{t+1}) exists (although it may be +∞ or −∞).

Hence, we want to make sure that i) Π(x_0) is not empty, i.e. there is some feasible plan, and ii) we can evaluate (say how good or bad it is) any feasible plan. For each n = 0, 1, 2, ... we will define u_n : Π(x_0) → R by

    u_n(x̃) = Σ_{t=0}^n β^t F(x_t, x_{t+1}),
which simply gives us the discounted return from following a feasible plan x̃ for the first n periods. Define u : Π(x_0) → R ∪ {±∞} by

    u(x̃) = lim_{n→∞} u_n(x̃),

the infinite discounted sum of returns from following the feasible plan x̃. Then, the supremum function is

    V*(x_0) = sup_{x̃ ∈ Π(x_0)} u(x̃).

A stronger assumption than A2, call it A2′, is that F is bounded, i.e. |F(x, y)| ≤ B for all (x, y) ∈ A; then

    |u(x̃)| ≤ Σ_{t=0}^∞ β^t |F(x_t, x_{t+1})| ≤ B/(1 − β) < ∞.

By definition, V*(x_0) satisfies:

    V*(x_0) ≥ u(x̃) for all x̃ ∈ Π(x_0);    (SP1)

and, for any ε > 0,

    V*(x_0) ≤ u(x̃) + ε for some x̃ ∈ Π(x_0).    (SP2)

Hence, if |V*(x_0)| < ∞, V*(x_0) is the smallest number that is no smaller than the value of any feasible plan.
We want to show that V* satisfies the FE. We have to make clear what we mean by that. We will say that V satisfies the FE if:

    V(x_0) ≥ F(x_0, y) + βV(y) for all y ∈ Γ(x_0);    (FE1)

and, for any ε > 0,

    V(x_0) ≤ F(x_0, y) + βV(y) + ε for some y ∈ Γ(x_0).    (FE2)

We will first prove that under A1 and A2′ the supremum function V* satisfies the FE. Before that, however, we prove a Lemma that is key: it tells us that we can separate the discounted infinite sum of returns from any feasible plan into current and future returns. This separation is at the heart of dynamic programming.
Lemma Under A1 and A2, for any x_0 ∈ X and any x̃ = (x_0, x_1, ...) ∈ Π(x_0),

    u(x̃) = F(x_0, x_1) + βu(x̃′),

where x̃′ = (x_1, x_2, ...) ∈ Π(x_1) is the continuation of x̃.

Proof: Remember

    u(x̃) = lim_{n→∞} Σ_{t=0}^n β^t F(x_t, x_{t+1}).

Then,

    u(x̃) = F(x_0, x_1) + β lim_{n→∞} Σ_{t=0}^n β^t F(x_{t+1}, x_{t+2}) = F(x_0, x_1) + βu(x̃′).

Note that for our one sector growth model example, this lemma simply states that

    Σ_{t=0}^∞ β^t U(f(x_t) − x_{t+1}) = U(f(x_0) − x_1) + β Σ_{t=0}^∞ β^t U(f(x_{t+1}) − x_{t+2}).
We can now show that V* satisfies the FE. To establish FE1, choose any x_1 ∈ Γ(x_0) and ε > 0, and choose x̃′ ∈ Π(x_1) such that

    u(x̃′) ≥ V*(x_1) − ε.

This is possible since V*(x_1) is the supremum over Π(x_1). Since x̃ = (x_0, x̃′) ∈ Π(x_0), by the Lemma,

    V*(x_0) ≥ u(x̃) = F(x_0, x_1) + βu(x̃′) ≥ F(x_0, x_1) + βV*(x_1) − βε.

Since ε > 0 was arbitrary,

    V*(x_0) ≥ F(x_0, y) + βV*(y) for all y ∈ Γ(x_0),

which establishes FE1. To establish FE2, choose x_0 ∈ X and ε > 0. Then, by SP2 and the previous Lemma, there exists x̃ = (x_0, x_1, x_2, ...) ∈ Π(x_0) such that

    V*(x_0) ≤ u(x̃) + ε = F(x_0, x_1) + βu(x̃′) + ε ≤ F(x_0, x_1) + βV*(x_1) + ε,

where the last inequality uses SP1. Hence FE2 holds with y = x_1.
Theorem Let X, Γ, F, and β satisfy A1–A2. If V is a solution to the FE and satisfies

    lim_{n→∞} β^n V(x_n) = 0 for all x̃ ∈ Π(x_0) and all x_0 ∈ X,

then V = V*.
Proof: Note that if we assumed A2′, the boundedness condition would be satisfied trivially. Hence, we could state this theorem as: "Let X, Γ, F, and β satisfy A1 and A2′. If V is a solution to the FE, then V = V*." Stating the boundedness condition explicitly makes its role more transparent. We need to show that FE1 and FE2 imply SP1 and SP2. Note that FE1 implies that for all x̃ ∈ Π(x_0),

    V(x_0) ≥ F(x_0, x_1) + βV(x_1)
           ≥ F(x_0, x_1) + βF(x_1, x_2) + β²V(x_2)
           ...
           ≥ u_n(x̃) + β^{n+1}V(x_{n+1}),  n = 1, 2, ....

As n → ∞, β^{n+1}V(x_{n+1}) → 0, so V(x_0) ≥ lim_{n→∞} u_n(x̃) = u(x̃), which establishes SP1.
To establish SP2, fix ε > 0 and choose {δ_t}_{t=1}^∞ in R₊ such that Σ_{t=1}^∞ β^{t−1} δ_t ≤ ε/2. Since FE2 holds, we can choose x_1 ∈ Γ(x_0), x_2 ∈ Γ(x_1), ..., so that

    V(x_t) ≤ F(x_t, x_{t+1}) + βV(x_{t+1}) + δ_{t+1},  t = 0, 1, ...,

i.e.

    V(x_0) ≤ F(x_0, x_1) + βV(x_1) + δ_1,
    V(x_1) ≤ F(x_1, x_2) + βV(x_2) + δ_2,
    ....

Substituting repeatedly,

    V(x_0) ≤ Σ_{t=0}^n β^t F(x_t, x_{t+1}) + β^{n+1}V(x_{n+1}) + (δ_1 + βδ_2 + ... + β^n δ_{n+1})
           ≤ u_n(x̃) + β^{n+1}V(x_{n+1}) + ε/2.

Letting n → ∞, V(x_0) ≤ u(x̃) + ε/2 < u(x̃) + ε, which establishes SP2.
Example Consider the following SP:

    sup_{ {c_t, x_{t+1}}_{t=0}^∞ }  Σ_{t=0}^∞ β^t c_t,

subject to

    0 ≤ c_t ≤ x_t − βx_{t+1},  x_0 given,

where x_{t+1} is unrestricted in sign. What is the value of V*(x_0)? Since the agent can borrow as much as he wants, it is obvious that V*(x_0) = +∞. Consider now the following FE that corresponds to this problem:

    V(x) = sup_y [ x − βy + βV(y) ].

Both V(x) = +∞ and Ṽ(x) = x are solutions to the FE, but Ṽ does not satisfy the boundedness condition in the previous theorem.
Theorem Let X, Γ, F, and β satisfy A1–A2, and let x̃* ∈ Π(x_0) attain the supremum in the SP. Then

    V*(x_t*) = F(x_t*, x_{t+1}*) + βV*(x_{t+1}*),  t = 0, 1, 2, ....

Theorem Let X, Γ, F, and β satisfy A1–A2, and let x̃* ∈ Π(x_0) satisfy the recursion above with

    lim sup_{t→∞} β^t V*(x_t*) ≤ 0.

Then x̃* attains the supremum in the SP.
Proof: See Stokey, Lucas and Prescott (1989), Theorem 4.3, page 76.
7.1 Bounded Returns

Suppose now that F is bounded and continuous and 0 < β < 1, so the FE is

    V(x) = max_{y ∈ Γ(x)} [ F(x, y) + βV(y) ].    (1)

We work with C(X), the bounded continuous functions, under the sup metric ρ(f, g) = sup_{x∈X} |f(x) − g(x)|.
Given any particular f(·) ∈ C(X), we can evaluate [F(x, y) + βf(y)] for every possible value of y and solve the maximization problem for any x. This operation will give us a new function denoted by (Tf)(·). Then, a solution to (1) will be given by a fixed point of the operator T. Hence, we are trying to find a V that satisfies V = T(V).
Given a fixed point T(V) = V ∈ C(X), we can characterize the policy correspondence G(x) given by

    G(x) = {y ∈ Γ(x) : V(x) = F(x, y) + βV(y)}.    (3)

Hence, if F(x, y) + βf(y) and y ∈ Γ(x) satisfy the properties of the Maximum Theorem, we can say something about the maximized value function Tf.

Beyond the Maximum Theorem, in general we would like to show that T : S → S (where S is some function space). Hence, if f ∈ S, then Tf ∈ S. Then we know that the fixed point V, TV = V, will also be an element of S.

We can also use the Corollary to the Contraction Mapping Theorem to further characterize V.

Finally, note that T defines a sequence of functions starting from any guess f⁰(x), by f¹ = (Tf⁰)(x). Sometimes we will work directly on this sequence to be able to say something about where these functions converge.

The following theorem establishes that the operator T has a unique fixed point, and that G(x) is compact-valued and u.h.c.
Consider the one sector growth model,

    V(x) = max_{0 ≤ y ≤ f(x)} { U[f(x) − y] + βV(y) }.

In this problem:

X is convex. Since X = [0, x̄], where x̄ is the highest maintainable capital stock, X is a convex subset of R.

Γ(x) is compact-valued. Fix any x ∈ X. Since f(0) = 0 and f(x) ≤ x̄, Γ(x) is bounded, and Γ(x) = [0, f(x)] is obviously closed. Hence, it is compact.

Γ(x) is continuous. It is bounded below and above by continuous functions; therefore it is continuous.

F is bounded and continuous: 0 ≤ f(x) − y ≤ f(x̄) on the feasible set, so U(f(x) − y) ≤ U(f(x̄)). Finally, β ∈ (0, 1).

Hence this problem satisfies the conditions in Assumption 1 and Assumption 2.
Does T, defined by

    (TV)(x) = max_{0 ≤ y ≤ f(x)} { U[f(x) − y] + βV(y) },

satisfy Blackwell's sufficient conditions? Let W(y) ≥ V(y) for all y; then

    (TW)(x) = max_{0 ≤ y ≤ f(x)} { U[f(x) − y] + βW(y) } ≥ max_{0 ≤ y ≤ f(x)} { U[f(x) − y] + βV(y) } = (TV)(x),

so T is monotone. Moreover,

    T(W + a)(x) = max_{0 ≤ y ≤ f(x)} { U[f(x) − y] + β(W(y) + a) }
                = max_{0 ≤ y ≤ f(x)} { U[f(x) − y] + βW(y) } + βa
                = (TW)(x) + βa,

so T discounts. Hence T is a contraction with modulus β.
Let S′ be the set of all bounded and continuous functions, and S″ be the set of all bounded, continuous, and strictly increasing functions. Then, in order to apply the Corollary we need to show that T maps bounded and continuous functions into strictly increasing functions.
To do this, suppose V(·) is non-decreasing; using the assumptions on F, we show that TV is strictly increasing. Let x′ > x; we want to show that TV(x′) > TV(x). Let y* be the maximizer when the current state is x, i.e.

    (TV)(x) = max_{y ∈ Γ(x)} [F(x, y) + βV(y)] = F(x, y*) + βV(y*).

Since F is strictly increasing in its first argument and Γ(x) ⊆ Γ(x′),

    (TV)(x) = F(x, y*) + βV(y*) < F(x′, y*) + βV(y*) ≤ max_{y ∈ Γ(x′)} [F(x′, y) + βV(y)] = (TV)(x′).

For concavity we need two further assumptions: F is strictly concave, i.e.

    F(θ(x, y) + (1 − θ)(x′, y′)) ≥ θF(x, y) + (1 − θ)F(x′, y′),

with strict inequality if x ≠ x′; and Γ is convex, i.e. for any θ ∈ [0, 1], y ∈ Γ(x) and y′ ∈ Γ(x′) imply

    θy + (1 − θ)y′ ∈ Γ(θx + (1 − θ)x′).
Theorem 155 Under the Assumptions 1-2 and 5-6 on X, Γ(x), F, and β, V is strictly concave and G is a continuous and single-valued function.
Sketch of the proof: The fact that V is strictly concave follows from the Corollary to the Contraction Mapping Theorem. The arguments are similar to those in Theorem 1.2, with S′ being the set of all bounded, continuous, and weakly concave functions and S″ being the set of all bounded, continuous, and strictly concave functions. The fact that G(x) is a single-valued function, call it g, follows from the Theorem of the Maximum under the additional assumptions of the strict concavity of the objective function and the convexity of the feasibility set.
Again, we want to show that TV is strictly concave, i.e. if x^θ = θx + (1 − θ)x′ with x ≠ x′ and θ ∈ (0, 1), then

    TV(x^θ) > θ(TV)(x) + (1 − θ)(TV)(x′).

Let y* be the maximizer for x and y** the maximizer for x′. Then

    θ(TV)(x) + (1 − θ)(TV)(x′)
      = θ[F(x, y*) + βV(y*)] + (1 − θ)[F(x′, y**) + βV(y**)]
      = [θF(x, y*) + (1 − θ)F(x′, y**)] + β[θV(y*) + (1 − θ)V(y**)]
      < F(x^θ, θy* + (1 − θ)y**) + βV(θy* + (1 − θ)y**)
      ≤ max_{y ∈ Γ(x^θ)} [F(x^θ, y) + βV(y)] = TV(x^θ),

where the strict inequality follows from the strict concavity of F and the weak concavity of V, and the last step from the feasibility of θy* + (1 − θ)y** ∈ Γ(x^θ).
For the one sector growth model, F is strictly concave:

    F[θ(x, y) + (1 − θ)(x′, y′)] = U[f(θx + (1 − θ)x′) − (θy + (1 − θ)y′)]
      ≥ U[θf(x) + (1 − θ)f(x′) − (θy + (1 − θ)y′)]
      = U[θ(f(x) − y) + (1 − θ)(f(x′) − y′)]
      > θU[f(x) − y] + (1 − θ)U[f(x′) − y′],

where the first inequality uses the concavity of f and the fact that U is increasing, and the last the strict concavity of U. And Γ is convex. In order to see that, let x, x′ ∈ [0, x̄], y ∈ Γ(x) and y′ ∈ Γ(x′). Hence, y ≤ f(x) and y′ ≤ f(x′). Let x^θ = θx + (1 − θ)x′ ∈ [0, x̄] and y^θ = θy + (1 − θ)y′. Note that

    y^θ = θy + (1 − θ)y′ ≤ θf(x) + (1 − θ)f(x′) ≤ f(θx + (1 − θ)x′) = f(x^θ),

so y^θ ∈ [0, f(x^θ)] = Γ(x^θ).
Now we also know that the solution to the one sector growth problem is a strictly concave function V.
The next theorem shows that under concavity restrictions, the policy functions associated with operator T converges uniformly. Hence, if gn (x) have
some nice properties that are preserved under uniform convergence, then g(x)
has those nice properties as well.
Theorem 156 Under Assumptions 1-2 and 5-6, if V_0 ∈ C(X) and {V_n, g_n} are defined by
V_{n+1} = TV_n, n = 0, 1, 2, …,
and
g_n(x) = arg max_{y∈Γ(x)} [F(x, y) + βV_n(y)], n = 0, 1, 2, …,
then g_n → g pointwise; if X is compact, the convergence is uniform.
[Figure: the functions V(x) and W(x) around the point x_0.]
One more time let's go back to our example. We know that U(f(x) − y) is a differentiable function. Hence the solution V is differentiable. Remember that for the special case of
f(k) = k^α and U(f(k) − k′) = ln(k^α − k′),
we had shown (using a guess-and-verify approach) that the value function took the form
V(k) = A + B ln(k),
where A and B were constants. Not surprisingly, V(k) = A + B ln(k) is strictly increasing, strictly concave and differentiable. Note, however, that u(c) = ln(c) is not bounded.
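This special case can also be checked numerically. The sketch below (an illustration with made-up parameter values, not part of the notes) iterates the Bellman operator on a grid and compares the computed policy with the known closed form g(k) = αβk^α:

```python
import numpy as np

# Value function iteration for f(k) = k**alpha, U = ln, and a comparison with
# the closed-form policy g(k) = alpha*beta*k**alpha (parameters are made up).
alpha, beta = 0.3, 0.95
grid = np.linspace(1e-3, 0.5, 500)                  # capital grid
c = grid[:, None] ** alpha - grid[None, :]          # c[i, j] = f(k_i) - k'_j
u = np.full_like(c, -np.inf)
u[c > 0] = np.log(c[c > 0])
V = np.zeros_like(grid)
for _ in range(2000):                               # iterate V <- T V
    V_new = (u + beta * V[None, :]).max(axis=1)
    done = np.max(np.abs(V_new - V)) < 1e-10
    V = V_new
    if done:
        break
policy = grid[(u + beta * V[None, :]).argmax(axis=1)]
closed_form = alpha * beta * grid ** alpha          # known analytic policy
print(np.max(np.abs(policy - closed_form)))         # small (grid error only)
```

The remaining discrepancy is pure discretization error from the capital grid.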
that satisfy Assumptions 1-7, we can write the following FOC for y:
F_y(x, y) + β V′(y) = 0,
where F_y(x, y) indicates the derivative of the return function with respect to y and V′(y) is the derivative of the value function. We can also write the following envelope condition:
V′(x) = F_x(x, y).
We can combine these two equations to arrive at
F_y(x_t, x_{t+1}) + βF_x(x_{t+1}, x_{t+2}) = 0.
This is the Euler equation.
Remark 159 Remember that when we solved this problem using a Lagrangian
approach, we got the same Euler equation.
Again for our one sector growth model the FOC for y is
U′(f(x) − y)(−1) + βV′(y) = 0,
and the envelope condition is
V′(x) = U′(f(x) − y)f′(x).
Substituting the policy function y = g(x), the two conditions give
U′(f(x) − g(x))(−1) + βU′(f(g(x)) − g(g(x)))f′(g(x)) = 0,
or
U′(f(x) − g(x)) = βU′(f(g(x)) − g(g(x)))f′(g(x)).
In terms of the capital stock, this is
U′(f(k) − k′) = βU′(f(k′) − k″)f′(k′),
or
U′(f(k_t) − k_{t+1}) = βU′(f(k_{t+1}) − k_{t+2})f′(k_{t+1}).
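For the log/Cobb-Douglas special case used earlier, the known policy g(k) = αβk^α can be checked directly against this Euler equation (a numerical sketch; the functional forms and numbers are for illustration only):

```python
# Check that g(k) = alpha*beta*k**alpha satisfies the Euler equation
# U'(f(k)-g(k)) = beta * U'(f(g(k))-g(g(k))) * f'(g(k)) for U = ln, f(k) = k**alpha.
alpha, beta = 0.3, 0.95
f = lambda k: k ** alpha
fp = lambda k: alpha * k ** (alpha - 1)             # f'(k)
g = lambda k: alpha * beta * f(k)                   # candidate policy
residuals = []
for k in (0.05, 0.1, 0.2, 0.4):
    lhs = 1.0 / (f(k) - g(k))                       # U'(c) = 1/c
    rhs = beta / (f(g(k)) - g(g(k))) * fp(g(k))
    residuals.append(abs(lhs - rhs))
print(max(residuals))                               # ~0: the Euler equation holds
```

Both sides reduce algebraically to 1/((1 − αβ)k^α), so the residuals are zero up to floating-point noise.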
7.2 Unbounded Returns
Suppose the return function is not bounded. Assume:
A1: Γ(x_0) is nonempty for all x_0 ∈ X, and
A2: for all x_0 ∈ X and x̃ ∈ Π(x_0), the limit lim_{n→∞} Σ_{t=0}^n β^t F(x_t, x_{t+1}) exists (although it may be +∞ or −∞).
Theorem 160 Suppose there is a function V̂ on X such that
(1) TV̂ ≤ V̂, and
(2) lim_{n→∞} β^n V̂(x_n) ≤ 0 for every feasible sequence {x_n}.
Then V = lim_n T^n V̂ is the value function V*.
This theorem simply states that if you can come up with a function V̂ that is an upper bound on V*, then we can work our way from V̂ to V, and V = V*.
Example 161 Consider the following SP:
max_{ {k_{t+1}}_{t=0}^∞ } Σ_{t=0}^∞ β^t ln(k_t^α − k_{t+1})
s.t.
0 ≤ k_{t+1} ≤ k_t^α.
Let X = (0, ∞). Then the return function is F(x, y) = F(k_t, k_{t+1}) = ln(k_t^α − k_{t+1}). It turns out that the function
V̂(k) = (α/(1−β)) ln(k) ≡ a ln(k)
satisfies all of the conditions of the previous theorem. Not surprisingly, the fixed point V is what we find using a guess-and-verify method (see SLP page 95). Where does V̂(k) come from? It is the maximum utility we would get if we had k^α every period, and did not save at all. Then each period we would get
ln(k^α) = α ln k,
so the total discounted utility would be
α ln k (1 + β + β² + ⋯) = (α/(1−β)) ln k = a ln(k).
k_t, and k_0 > 0 given. The function u maps R_+ to R and is strictly increasing and continuous, while β ∈ (0,1). Let X be the state space, let F : X × X → R be the return function, and let Γ : X → X be the feasible set. Specify X, F, and Γ for the above problem.
Exercise 163 Consider the following problem: choose a sequence {c_t, k_{t+1}, n_t} for t ≥ 0 to maximize Σ_{t=0}^∞ β^t u(c_t, 1 − n_t) subject to
c_t + k_{t+1} ≤ f(k_t, n_t) + (1 − δ)k_t, 0 ≤ n_t ≤ 1,
and k_0 > 0 given. The functions u and f map R_+ × R_+ to R_+ and are strictly increasing and continuous, while β ∈ (0,1) and δ ∈ [0,1]. Let X be the state space, let F : X × X → R be the return function, and let Γ : X → X be the feasible set. Specify X, F, and Γ for the above problem.
Exercise 164 Consider the following problem: choose a sequence {c_t, n_{1t}, n_{2t}, k_{t+1}} for t ≥ 0 to maximize Σ_{t=0}^∞ β^t U(c_t, 1 − n_{1t} − n_{2t}) subject to
c_t = f(k_t, n_{1t}) and k_{t+1} = n_{2t},
and k_0 > 0 given. The functions U and f map R_+² to R_+ and both are strictly increasing, continuous, and strictly concave, while β ∈ (0,1). Let X be the state space, let F : X × X → R be the return function, and let Γ : X → X be the feasible set. Specify these objects for the above problem.
Exercise 165 Consider the following model. There is one person, two inputs (capital and labor) and two outputs (a consumption good and an investment good). The person has preferences given by Σ_{t=0}^∞ β^t u(c_t), where u : R_+ → R and c_t is date t output of the consumption good. The production function for output of the date t consumption good is G_1(k_{1t}, n_{1t}) and that for output of the investment good is G_2(k_{2t}, n_{2t}), where k_{1t} + k_{2t} ≤ k_t and n_{1t} + n_{2t} ≤ 1. At date 0, k_0 is given and k_{t+1} = (1 − δ)k_t + G_2(k_{2t}, n_{2t}), where δ ∈ (0,1). Aside from the discount factor, the components of a sequential problem as defined in Stokey and Lucas with Prescott are as follows: X (a set in which the state variable lies), Γ : X → X (a constraint set), and F : X × X → R (a return function). Describe X, Γ, and F for the above model.
Exercise 166 Let X = R_+, F(x, y) = 0 for all (x, y), and Γ(x) = [0, x/β], where β ∈ (0,1) is the discount factor. Here X is the state space, F is the return function, and Γ is the feasible set. (a) Show that v(x) = x satisfies the associated functional equation. (b) Is v(x) = x the maximized value of the objective in the corresponding sequential problem? Explain.
n1t
c1t
1
; with
; with
2 (0; 1);
2 (0; 1):
Show that the sequence of chopped trees will follow the following equation
t+1
1
+
t:
Argue that the relationship between the initial set of trees s_0 and the sequence of chopped trees can be used to pin down its initial value, and hence to define the whole sequence.
Exercise 169 This problem tries to get you acquainted with the Principle of Optimality (i.e. Theorems 4.2 and 4.3 in Stokey, Lucas and Prescott 1989). Consider the consumption-saving decision of an infinitely lived agent with initial assets x_0 ∈ X = R. The agent can borrow or lend at rate 1 + r = R = 1/β > 1. Hence, the price of borrowing one unit for tomorrow is β. There are no borrowing constraints, i.e.
c_t + βx_{t+1} ≤ x_t.
The SP is
sup_{ {c_t, x_{t+1}}_{t=0}^∞ } Σ_{t=0}^∞ β^t c_t
s.t.
0 ≤ c_t ≤ x_t − βx_{t+1},
x_0 given.
What is the value of V*(x_0)? Consider the following FE that corresponds to this problem:
V(x) = sup_y [x − βy + βV(y)].
Show that both V(x) = ∞ and Ṽ(x) = x are solutions to the FE. Does Ṽ satisfy the boundedness condition in Theorem 4.3 (page 72 in SLP)? Remember that when we solved a SP using a Lagrangian approach, we needed the Transversality Condition (TC). Hence, if a sequence of actions satisfied the Euler equation, it was the optimal solution if it also satisfied the TC. Argue (with some carefully selected sentences) that the TC and the boundedness condition in Theorem 4.3 (page 72 in SLP) serve the same purpose.
Example 170 Consider the standard one sector growth model,
V(k) = max_{0 ≤ k′ ≤ f(k)} {U(f(k) − k′) + βV(k′)}.
By using the FOCs for this problem, show that if V is strictly concave, then k′ = g(k) is increasing in k. Remember that the FOC for k′ is
U′(f(k) − k′)(−1) + βV′(k′) = 0.
Note that this equation characterizes k′ given V′. Hence, we can use the Implicit Function Theorem: define
H(k′, k) = U′(f(k) − k′)(−1) + βV′(k′).
Then,
g′(k) = dk′/dk = −H_k/H_{k′} = −[U″(f(k) − k′)f′(k)(−1)] / [U″(f(k) − k′)(−1)(−1) + βV″(k′)]
= U″(f(k) − k′)f′(k) / [U″(f(k) − k′) + βV″(k′)].
Since U″ < 0 and V″ < 0, both the numerator and the denominator are negative, so g′(k) > 0. Using FOCs and the Implicit Function Theorem to characterize g is a powerful tool that is often used. Note that often we know if V is strictly increasing or not, or concave or not.
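The formula can be verified in the log/Cobb-Douglas case, where V(k) = A + B ln k with B = α/(1 − αβ) and g(k) = αβk^α are known in closed form (a numerical sketch with illustrative numbers, not part of the original notes):

```python
# g'(k) from the implicit-function formula vs. the derivative of the known
# closed-form policy g(k) = alpha*beta*k**alpha (U = ln, f(k) = k**alpha).
alpha, beta = 0.3, 0.95
B = alpha / (1 - alpha * beta)          # V(k) = A + B*ln(k)
k = 0.2
gk = alpha * beta * k ** alpha          # g(k)
c = k ** alpha - gk                     # consumption f(k) - g(k)
Upp = -1.0 / c ** 2                     # U''(c) for U = ln
Vpp = -B / gk ** 2                      # V''(g(k))
fp = alpha * k ** (alpha - 1)           # f'(k)
g_prime = (Upp * fp) / (Upp + beta * Vpp)
print(g_prime, alpha ** 2 * beta * k ** (alpha - 1))  # the two values coincide
```

Here the implicit-function formula reproduces g′(k) = α²βk^{α−1}, the derivative of the closed-form policy, exactly.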
Example 171 Consider the following version of the one sector growth model: again each agent starts his/her life with k_0, and lives forever. Agents have to decide each period how much to work (hence labor supply is endogenous) and how much to save. Each agent has one unit of time. Let n_t denote the labor supply. Then, the production function is now
f(k_t, n_t) = F(k_t, n_t) + (1 − δ)k_t,
and preferences are given by
Σ_{t=0}^∞ β^t U(f(k_t, n_t) − k_{t+1}, 1 − n_t).
Let's write down the DP problem facing an agent. Note that the capital stock is still the only state variable in this economy (i.e. at each date agents only need to know k_t in order to be able to make decisions on n_t and k_{t+1}). Then,
V(k_t) = max_{k_{t+1}, n_t} [U(f(k_t, n_t) − k_{t+1}, 1 − n_t) + βV(k_{t+1})].
The FOC for k_{t+1} is
U_1(f(k_t, n_t) − k_{t+1}, 1 − n_t)(−1) + βV′(k_{t+1}) = 0,
and the FOC for n_t is
U_1(f(k_t, n_t) − k_{t+1}, 1 − n_t)f_2(k_t, n_t) + U_2(f(k_t, n_t) − k_{t+1}, 1 − n_t)(−1) = 0,
where U_i and f_i indicate the derivative w.r.t. the ith argument. The FOC for n_t does not involve V. We will call it a static FOC. Note that it simply states that
U_2(·)/U_1(·) = f_2(k_t, n_t),
i.e. the marginal rate of substitution between leisure and consumption equals the marginal product of labor.
When we have a static FOC, we can always solve it first and then focus on the dynamic FOC. In this example, we can use the FOC for n_t to find the optimal work decision
n_t = N(k_t, k_{t+1}),
and then substitute it in the FOC for k_{t+1}:
U_1(f(k_t, N(k_t, k_{t+1})) − k_{t+1}, 1 − N(k_t, k_{t+1}))(−1) + βV′(k_{t+1}) = 0.
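Solving a static FOC first is easy to do numerically. The sketch below uses made-up functional forms, U(c, l) = ln c + ψ ln l and F(k, n) = k^α n^{1−α} (so the static FOC is U_1 f_2 − U_2 = 0), and finds n = N(k, k′) by bisection:

```python
# Solve the static FOC U1*f2 - U2 = 0 for n, given (k, k'); the functional
# forms and parameter values below are made up for illustration.
alpha, delta, psi = 0.3, 0.1, 0.5
k, k_next = 1.0, 1.0

def static_foc(n):
    c = k ** alpha * n ** (1 - alpha) + (1 - delta) * k - k_next   # consumption
    f2 = (1 - alpha) * k ** alpha * n ** (-alpha)                  # marginal product of labor
    return f2 / c - psi / (1 - n)       # U1*f2 - U2 for U = ln(c) + psi*ln(1-n)

lo, hi = 0.05, 0.999                    # bracket on which consumption is positive
for _ in range(100):                    # bisection: static_foc decreases on the bracket
    mid = 0.5 * (lo + hi)
    if static_foc(mid) > 0:
        lo = mid
    else:
        hi = mid
n_star = 0.5 * (lo + hi)
print(n_star, static_foc(n_star))       # n = N(k, k'): root of the static FOC
```

Once N(k, k′) is in hand, it can be plugged into the dynamic FOC exactly as in the text.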
A representative agent now sees z_t, and then makes a decision on k_{t+1}. Hence, a representative agent needs to know both k_t and z_t in order to be able to decide on k_{t+1}. Then, the SP will be
max_{ {k_{t+1}}_{t=0}^∞ } E_0 [ Σ_{t=0}^∞ β^t U(f(k_t, z_t) − k_{t+1}) ].
The corresponding FE is
V(k, z) = max_{0 ≤ k′ ≤ f(k,z)} {U[f(k, z) − k′] + βE[V(k′, z′)]},
where z′ is the value of the next period shock, which is unknown at the current period when k′ is chosen, and E(·) is the expected value operator. Hence, we have two new features in the stochastic case:
First, the state of the problem now consists of both the current capital stock k and the current shock z. Therefore, in order to be able to analyze how the solution of this problem behaves over time, we need to keep track of both k and z.
Second, we need to characterize what we mean by the expression E(·).
Example 173 Consider again the stochastic version of the one-sector growth model with δ = 1. Hence,
f(k, z) = zf(k), where f(k) ≡ F(k, 1).
Furthermore, suppose z can only take values from a finite set Z = {z_1, z_2, …, z_n}. In this case a probability distribution over Z is simply an assignment of probabilities (π_1, π_2, …, π_n) to each element of Z. Since the π_i are probabilities,
π_i = Pr(z = z_i), π_i ≥ 0 ∀i, and Σ_{i=1}^n π_i = 1.
Then the FE becomes
V(k, z_j) = max_{0 ≤ k′ ≤ f(k)z_j} { U[f(k)z_j − k′] + β Σ_{i=1}^n π_i V(k′, z_i) }, for each j.
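This finite-shock Bellman equation can be iterated numerically. The sketch below (δ = 1, f(k, z) = zk^α, log utility; shock values and probabilities are made up) exploits the fact that with these functional forms the policy is known to be k′ = αβzk^α, which gives a check on the iteration:

```python
import numpy as np

# Value function iteration with an iid two-point shock (illustrative numbers).
alpha, beta = 0.3, 0.95
z_vals = np.array([0.9, 1.1])                 # iid shock values (made up)
pi = np.array([0.5, 0.5])                     # pi_i = Pr(z = z_i)
grid = np.linspace(1e-3, 0.6, 300)
# c[i, j, s] = z_s * k_i**alpha - k'_j  (delta = 1, so f(k, z) = z*k**alpha)
c = z_vals[None, None, :] * grid[:, None, None] ** alpha - grid[None, :, None]
u = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)
V = np.zeros((grid.size, z_vals.size))
for _ in range(3000):
    EV = V @ pi                               # E[V(k', z')]: iid, independent of current z
    V_new = (u + beta * EV[None, :, None]).max(axis=1)
    done = np.max(np.abs(V_new - V)) < 1e-9
    V = V_new
    if done:
        break
policy = grid[(u + beta * (V @ pi)[None, :, None]).argmax(axis=1)]
exact = alpha * beta * z_vals[None, :] * grid[:, None] ** alpha   # known closed form
print(np.max(np.abs(policy - exact)))         # small (grid error only)
```

With log utility and full depreciation the savings rate αβ is independent of the shock distribution, which is why the closed form is available.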
Example 174 Now let z take values from a finite set Z = {z_1, z_2, …, z_n}, but follow a Markov process. Let π_ij denote the probability that the next period's shock will be z_j given that today's shock is z_i. The matrix formed by the π_ij for all i, j is called a transition matrix:
Π = [ π_11 π_12 ⋯ π_1n
      π_21 π_22 ⋯ π_2n
      ⋮
      π_n1 π_n2 ⋯ π_nn ].
The FE then becomes
V(k, z_i) = max_{0 ≤ y ≤ f(k)z_i} { U[f(k)z_i − y] + β Σ_{j=1}^n π_ij V(y, z_j) }.
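As a small numerical aside (the transition probabilities below are made up), the long-run behavior of such a chain is summarized by its stationary distribution π*, the row vector with π*Π = π*:

```python
import numpy as np

# A two-state transition matrix and its stationary distribution (illustrative).
Pi = np.array([[0.9, 0.1],
               [0.4, 0.6]])                 # Pi[i, j] = Pr(z' = z_j | z = z_i); rows sum to 1
vals, vecs = np.linalg.eig(Pi.T)            # left eigenvector for eigenvalue 1
pi_star = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi_star = pi_star / pi_star.sum()
print(pi_star)                              # [0.8, 0.2]: solves pi* @ Pi = pi*
```

The stationary distribution is the (normalized) left eigenvector of Π associated with the unit eigenvalue.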
Example 175 Finally suppose Z = [z̲, z̄]. Then we have to define a distribution function for z. Suppose z is iid with
H(a) = prob(z ≤ a).
Then, we can write the DP problem as
V(k, z) = max_{0 ≤ k′ ≤ f(k,z)} { U[f(k, z) − k′] + β ∫_z̲^z̄ V(k′, z′) dH(z′) }.
On the other hand, if the future distribution of z depends on its current realization, i.e. if
H(a|z) = prob(z′ ≤ a | z),
we will write
V(k, z) = max_{0 ≤ k′ ≤ f(k,z)} { U[f(k, z) − k′] + β ∫_z̲^z̄ V(k′, z′) dH(z′|z) }.
Rl :
< 1; a > 0:
where W(k) is a continuous, increasing, bounded and strictly concave function defined on [0, k̄] and f(k) is a continuous, increasing and bounded function defined on [0, k̄]. Does T map strictly concave functions into strictly concave functions? Prove or provide a counter-example.
max_{0 ≤ y ≤ f(x)} {U[f(x) − y] + βv(y)}
Σ_{t=0}^∞ u(c_t, l_t)
Exercise 182 A firm maximizes the present value of cash flows, with future earnings being discounted at rate β. Income at time t is given by sales p_t·q_t, where p_t is the price of the good and q_t is the quantity produced. The firm behaves competitively and therefore takes prices as given. It knows that prices evolve according to a law of motion given by p_{t+1} = f(p_t). Total or gross production depends on the amount of capital, k_t, and labor, n_t, and on the square of the difference between the current ratio of sales to investment x_t and the previous period's ratio. This last feature captures the notion that changes in the ratio of sales to investment require some reallocation of resources within the firm. It is assumed that the wage rate is constant and equal to w. Capital depreciates at rate δ. The firm's problem is
max Σ_{t=0}^∞ β^t [p_t q_t − w n_t],
subject to
q_t + x_t ≤ g(k_t, n_t, (q_t/x_t − q_{t−1}/x_{t−1})²),
k_{t+1} ≤ (1 − δ)k_t + x_t,
p_{t+1} = f(p_t),
k_0 > 0 and q_{−1}/x_{−1} > 0 given.
We assume that g is bounded, increasing in the first two arguments and decreasing in the last argument. Formulate this problem as a dynamic programming problem.
Exercise 183 A worker's instantaneous utility function u(·) depends on the amount of market goods consumed, c_{1t}, and also on the amount of home produced goods, c_{2t}. In order to acquire market produced goods the worker must allocate some amount of time l_{1t} to market activities that pay a salary of w. The worker takes wages as given. It is known that wages move according to w_{t+1} = h(w_t). The quantity of home produced goods depends on the stock of expertise that the worker has at the beginning of the period, which we label a_t. This stock of expertise depreciates at rate δ and can be increased by allocating time to non-market activities. Hence, the agent's problem is
max Σ_{t=0}^∞ β^t u(c_{1t}, c_{2t})
subject to
c_{1t} ≤ w_t l_{1t},
c_{2t} ≤ f(a_t),
a_{t+1} ≤ (1 − δ)a_t + l_{2t},
l_{1t} + l_{2t} ≤ l̄,
w_{t+1} = h(w_t),
a_0 > 0 given.
It is assumed that u and h are bounded and continuous. Formulate this problem as a dynamic programming problem.
Exercise 184 Consider the problem of choosing a consumption sequence {c_t} to maximize
Σ_{t=0}^∞ β^t (ln c_t + γ ln c_{t−1}), 0 < β < 1, γ > 0,
subject to
c_t + k_{t+1} ≤ …,
k_0 given.
Exercise 185 Consider the problem of choosing a sequence {a_t} to maximize
Σ_{t=0}^∞ u(a_t)
subject to Σ_{t=0}^∞ a_t ≤ s, and a_t ≥ 0. This problem, which is a dynamic programming problem with β = 1, is known as the cake-eating problem. We begin with a cake of size s and we need to allocate this cake to consume in each period of an infinite horizon.
Show that this problem may be written as a dynamic programming problem. That is, describe formally the state and choice spaces, the reward function, and the feasibility correspondence.
Show that if u : A → R is increasing and linear, i.e. u(a) = ka for some k > 0, then the problem always has at least one solution. Find a solution.
Show that if u : A → R is increasing and strictly concave, then the problem has no solution.
Exercise 186 Write down the Bellman equation for the following problem. Consider the problem of a worker who faces a wage offer w every period from a distribution F(W) = prob(w ≤ W). If the worker accepts the offer, he commits to work at wage w forever. If he declines the offer, he can search one more period and draw a new wage offer from the distribution F. If the worker is unemployed, he receives unemployment compensation c.
Exercise 187 Write down the Bellman equation for the following problem. Consider the problem of a firm which faces a firm-specific productivity shock s_t each period. The price p for the firm's product and the wage rate w are constant over time. Output is a function of the employment level n_t and s_t and is given by f(n_t, s_t). Upon observing s_t, the firm decides how much labor to employ for the current period. Changing the level of employment implies an adjustment cost that is given by g(n_t, n_{t−1}). The shock that the firm faces next period depends on the value of the shock in the current period. The firm also has the option of exit for the next period.
Exercise 188 Consider the problem of a monopolist introducing a new product. The monopolist desires to maximize the present value of his profits. He faces a constant interest rate r. In each period the monopolist faces the downward sloping demand curve described by
p = D(q), with D(0) = p̄, D(∞) = p̲, and D′(q) < 0.
That is, according to demand schedule D the monopolist can sell q units of output at a price of p. Marginal revenue, or D(q) + qD′(q), is assumed to be strictly decreasing in q. Let the per unit cost of production be given by c. This cost declines convexly with production experience, e, according to
c = C(e), with C(0) < p̄, C(∞) > p̲, C′(e) < 0, and C″(e) > 0.
Production experience is taken to be the cumulative sum of past production, so the law of motion governing experience is given by
e′ = e + q.
a) Formulate the monopolist's dynamic programming problem. Hint: cast the relevant maximization problem in terms of the production experience variable. b) Compute the first order condition associated with this problem. Interpret it. Prove that the decision rule for q is strictly increasing. c) Consider the problem of a myopic monopolist who maximizes current period profits. Who produces the most: the myopic or the farsighted monopolist?
Exercise 189 Consider the following version of the one sector growth model. The economy is populated by identical representative agents who want to maximize
E_0 Σ_t β^t u(c_t, n_t), with 0 < β < 1,
subject to
c_t + g_t + k_{t+1} ≤ F(k_t, n_t),
and
0 ≤ n_t ≤ 1, c_t ≥ 0, k_{t+1} ≥ 0, and k_0 given.
Here c_t is time t consumption, n_t is hours worked, and g_t is government spending. Government spending does not create any utility for the consumers. It is iid and drawn from the following cumulative distribution each period:
Pr[g_t ≤ a] = G(a).
Agents make decisions after they observe the level of government spending. Assume u_1 > 0, u_2 < 0, u_{11} < 0, u_{22} < 0, u_{21} < 0, and F_1 > 0, F_2 > 0, F_{11} < 0, F_{22} < 0. Write down the dynamic programming problem for a representative agent (or equivalently for a central planner).
1. Derive the FOC for n_t. Show that dn_t/dg_t ≥ 0.
2. Derive the FOC and Euler equation for k_{t+1}. Is k_{t+1} increasing or decreasing in g_t?
3. Will your results about the properties of the policy functions change if g_t was serially correlated?
Exercise 190 Consider the following savings-consumption problem for an infinitely lived agent:
max U = E_0 [ Σ_{t=0}^∞ β^t ((a + ε_t)c_t − b c_t²) ]
s.t. a_{t+1} = Ra_t + y_t − c_t,
and a_0 > 0 given, with 0 < β < 1 and ε_t iid.
Exercise 191 Let
u(c_t) = c_t^{1−γ}/(1−γ), with γ > 0.
Assume that the gross interest rate R_t is independently and identically distributed and is such that E(R_t^{1−γ}) < 1/β. Consider the problem of a representative agent who wants to maximize
E Σ_{t=0}^∞ β^t u(c_t), 0 < β < 1,
subject to
A_{t+1} ≤ R_t(A_t − c_t), and A_0 given.
Exercise 192 Consider a representative agent who maximizes
Σ_{t=0}^∞ β^t U(c_t, l_t)
subject to
c_t + i_t = F(k_t h_t, l_t),
k_{t+1} = (1 − δ(h_t))k_t + ε_t i_t,
and
k_0 > 0 given.
Here c_t is consumption and l_t is labor supply. Production for a given capital stock k_t and labor supply l_t is given by F(k_t h_t, l_t). The variable h_t is the intensity of factor utilization for capital, which is a choice variable. When h_t is high, you get more services from a given capital stock. Using the capital stock more intensely, however, has a cost: it makes the capital stock depreciate faster, i.e. δ′ > 0. Moreover, let 0 < δ < 1 and δ″ > 0. Hence in this economy depreciation is not fixed, but given by the function δ(h). There is an investment-specific technology shock here. The productivity of the existing capital at time t, k_t, is not affected by this technology shock. When ε_t is high, however, it is cheaper to make investment, since a given unit of [F(kh, l) − c] can create more investment and a higher capital stock next period. Let ε_t be an autocorrelated random variable with cumulative distribution Φ(ε_{t+1}|ε_t), and let ε_t ∈ Q = [ε̲, ε̄].
Let
U(c_t, l_t) = U(c_t − G(l_t)),
with U′ > 0, U″ < 0, G′ > 0, G″ > 0. Find the marginal rate of substitution between consumption and labor given this particular utility function. What is the significance of the MRS that this particular utility function implies?
Exercise 193 Consider the problem of choosing {c_t, k_{t+1}}_{t=0}^∞ to maximize
Σ_{t=0}^∞ β^t u(c_t),
subject to
c_t + k_{t+1} ≤ F(k_t, l_t),
k_0 > 0 given,
c_t ≥ 0, k_{t+1} ≥ 0, l_t ≤ l̂_t.
The agent has k_0 > 0 units of capital at time 0. The agent also has 2 units of labor endowment in even periods and 1 unit of labor endowment in odd periods, i.e.
(l̂_0, l̂_1, l̂_2, …) = (2, 1, 2, 1, …),
where l̂_t denotes the labor endowment at time t. Formulate the problem of maximizing the consumer's utility subject to the feasibility constraint as a dynamic programming problem.
Exercise 194 Consider the following version of the one sector growth model with linear utility:
max_{ {c_t, k_{t+1}}_{t=0}^∞ } Σ_{t=0}^∞ β^t c_t,
subject to
c_t + k_{t+1} ≤ f(k_t),
k_0 > 0 given,
c_t ≥ 0, k_{t+1} ≥ 0.
Here f(k_t) represents the total amount of goods available at time t. The function f is strictly increasing, strictly concave with f(0) = 0. The representative agent has k_0 units of capital at time 0, and chooses a sequence {c_t, k_{t+1}}_{t=0}^∞ to maximize Σ_{t=0}^∞ β^t c_t.
Write down the dynamic programming problem associated with this sequential problem.
Write down the FOC, the Envelope condition, and the Euler equation associated with the dynamic programming problem. Using the Euler equation, show that the optimal decision rule for an interior solution is
k_{t+1} = g(k_t) = (f′)^{−1}(1/β) = k*,
where (f′)^{−1} is the inverse of f′.
9 Deterministic Dynamics
Consider the sequential problem
max_{ {x_{t+1}}_{t=0}^∞ } Σ_{t=0}^∞ β^t F(x_t, x_{t+1}),   (SP)
subject to
x_{t+1} ∈ Γ(x_t), t = 0, 1, 2, …,
and
x_0 given,
where F is a bounded return function and Γ is a feasibility correspondence. We know that (see Chapter 4 of SLP) the maximized value function
V(x_0) = max_{ {x_{t+1}}_{t=0}^∞ } Σ_{t=0}^∞ β^t F(x_t, x_{t+1})
satisfies the FE. For the one sector growth model the FE is
V(k) = max_{0 ≤ k′ ≤ f(k)} {U(f(k) − k′) + βV(k′)},   (52)
where k is the current period capital stock, k′ is the next period capital stock to be chosen, f(·) is the production function, U(·) is the utility function, and β is the discount factor. Assume that:
1. f : R_+ → R_+ and U : R_+ → R are continuous, strictly concave, strictly increasing, and continuously differentiable.
2. β ∈ (0,1).
4. The maximum in (52) is attained at a unique value g(k), and the policy function g(·) is continuous.
5. Given any k_0, the sequence k_{t+1} = g(k_t) defines an optimal path for the capital stock, i.e. it is a solution for SP.
We want to characterize g(·) in order to understand how the optimal sequence of the capital stock behaves over time. In particular, we would first like to find stationary points of g (the points where g(k) = k), and then try to figure out if and how the economy moves over time towards these stationary points.
We also know under these assumptions that the solution is everywhere interior, and g(k) is characterized by the first order and envelope conditions for (52),
U′(f(k) − g(k)) = βV′(g(k)),   (53)
and
V′(k) = U′[f(k) − g(k)]f′(k).   (54)
These conditions also imply that g is increasing. Writing (53) at k and at k′,
U′(f(k) − g(k)) = βV′(g(k)) and U′(f(k′) − g(k′)) = βV′(g(k′)),
so that
U′(f(k) − g(k)) / U′(f(k′) − g(k′)) = V′(g(k)) / V′(g(k′)).
Suppose k < k′ but g(k) ≥ g(k′). Then f(k) − g(k) < f(k′) − g(k′), so
U′(f(k) − g(k)) / U′(f(k′) − g(k′)) > 1,
while
V′(g(k)) / V′(g(k′)) ≤ 1,
leading to a contradiction.
9.1 Stationary Points of g(·)
At a stationary point k = g(k), conditions (53) and (54) become
U′(f(k) − k) = βV′(k),   (56)
and
V′(k) = U′[f(k) − k]f′(k).   (57)
Combining the two, βf′(k) = 1. Since f′(·) is continuous and strictly decreasing, there will be a unique k* satisfying this condition, given by
k* = (f′)^{−1}(1/β).   (58)
This is our unique candidate for a stationary point.
We showed that g(k) = k implies (58) (the necessary condition); if we can also show that (58) implies g(k) = k, we will have a full characterization of the stationary points of g(·).
In order to get the sufficient condition, note that the strict concavity of V implies that for any k and k′,
[V′(k) − V′(k′)][k − k′] ≤ 0, and
[V′(k) − V′(k′)][k − k′] = 0 if and only if k = k′.
In particular, setting k′ = g(k),
[V′(k) − V′(g(k))][k − g(k)] ≤ 0, and
[V′(k) − V′(g(k))][k − g(k)] = 0 if and only if k = g(k).
From (53) and (54),
βV′(g(k)) = U′[f(k) − g(k)],
and
V′(k) = U′[f(k) − g(k)]f′(k),
leading to
[V′(k) − V′(g(k))] = U′[f(k) − g(k)](f′(k) − 1/β).
Since U′(·) > 0,
[f′(k) − 1/β][k − g(k)] ≤ 0, and
[f′(k) − 1/β][k − g(k)] = 0 if and only if k = g(k).
Hence, if f′(k) = 1/β, then k = g(k), i.e. (58) implies g(k) = k.
In order to characterize the behavior of the economy out of the steady state, note that since k* is the unique steady state, if k ≠ k*,
[f′(k) − 1/β][k − g(k)] < 0.
Since f(·) is concave,
f′(k) > (<) 1/β ⟺ k < (>) k*.
Therefore,
g(k) > (<) k ⟺ k < (>) k*,
and g(k) will look like Figure 20.
This analysis shows that: (i) there is a unique steady state k* > 0; (ii) given any k_0 > 0, the economy will converge to k* (global stability); and (iii) the convergence is monotone.
[Figure 20: the policy function g(k), crossing the 45-degree line at k*.]
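These three properties can be illustrated numerically in the log/Cobb-Douglas case, where g(k) = αβk^α is known in closed form (a sketch with illustrative parameter values):

```python
# Convergence k_{t+1} = g(k_t) for g(k) = alpha*beta*k**alpha (illustrative numbers).
alpha, beta = 0.3, 0.95
g = lambda k: alpha * beta * k ** alpha
k_star = (alpha * beta) ** (1 / (1 - alpha))    # solves f'(k) = 1/beta for f(k) = k**alpha
k, path = 0.01, []
for _ in range(200):
    path.append(k)
    k = g(k)
print(abs(k - k_star))                          # ~0: global convergence to k*
print(all(path[t] < path[t + 1] for t in range(20)))  # True: monotone from below
```

Starting below k*, the capital path rises monotonically to the unique steady state, exactly as the figure suggests.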
9.1.1 A Non-monotone Example
Consider an economy with two sectors, one producing consumption goods and the other producing investment goods. The agents have one unit of time. The consumption good is produced with capital and labor (with constant returns, so that consumption is (1 − y)f(k/(1 − y)) when a fraction 1 − y of time works in that sector), while new capital is produced with labor alone, so that the next period capital stock equals the time y allocated to the investment sector:
V(k) = max_{y∈[0,1]} { U[(1 − y) f(k/(1 − y))] + βV(y) },   (59)
where k is the current capital stock, and y = 1 − n is the next period capital stock to be chosen. The production function for capital goods implies that y ∈ [0,1]. [Check that V(·) is strictly increasing and strictly concave, and the policy function g : [0,1] → [0,1] is continuous.]
Let's find the stationary points of g(·). In this example, k = 0 is no longer a stationary point. Since capital goods are produced by labor only, an economy with k = 0 will not be stuck there.
The first order and envelope conditions for this problem are given by
U′[(1 − g(k)) f(k/(1 − g(k)))] [ f(k/(1 − g(k))) − (k/(1 − g(k))) f′(k/(1 − g(k))) ] = βV′(g(k)),
and
V′(k) = U′[(1 − g(k)) f(k/(1 − g(k)))] f′(k/(1 − g(k))).
At a stationary point g(k) = k, these two conditions imply
f(k/(1 − k)) − (k/(1 − k)) f′(k/(1 − k)) − β f′(k/(1 − k)) = 0.   (60)
As before, the strict concavity of V implies
[V′(k) − V′(g(k))][k − g(k)] ≤ 0, and
[V′(k) − V′(g(k))][k − g(k)] = 0 if and only if k = g(k).
Substituting the first order and envelope conditions (with z = k/(1 − g(k))), we get
U′(·){ f′(z) − (1/β)[f(z) − z f′(z)] }[k − g(k)] ≤ 0,
hence,
{ f′(z) − (1/β)[f(z) − z f′(z)] }[k − g(k)] ≤ 0,
and since U′(·) > 0, (60) is also a sufficient condition for a stationary point.
We again have a unique stationary point k*, which is characterized by (60). But it is obvious that g(·) will not look like Figure 20. If k is near 0, it is optimal to spend most of the time producing capital; hence g(k) near 0 will be close to one, and g(·) will be decreasing.
In order to investigate this possibility further, let
U(c) = c^θ and f(z) = z^α, where 0 < θ, α < 1.
With these functional forms, consumption is k^α(1 − g(k))^{1−α}, and the first order condition becomes
θ [k^α (1 − g(k))^{1−α}]^{θ−1} (1 − α) k^α (1 − g(k))^{−α} = βV′(g(k)),
resulting in
θ(1 − α) k^{αθ} (1 − g(k))^{(1−α)θ−1} = βV′(g(k)).
Writing this condition at k and at k′ > k and taking the ratio,
(k/k′)^{αθ} [(1 − g(k))/(1 − g(k′))]^{(1−α)θ−1} = V′(g(k))/V′(g(k′)).
If g(k′) = g(k), the LHS < 1 and the RHS = 1. If g(k′) > g(k), the LHS < 1 and the RHS > 1. Hence g(k) is decreasing.
10 Euler Equations
Consider the Euler equation for the standard one sector growth model,
U′(f(k_t) − k_{t+1}) = βU′(f(k_{t+1}) − k_{t+2})f′(k_{t+1}),   (61)
where U(·) and f(·) are the utility and production functions, β is the discount factor, and k_t is the capital stock. We know that given any initial capital stock k_0, the sequence of capital stocks {k_t} that satisfies (61) and the transversality condition is an optimal solution for this problem. Note that (61) is a (non-linear) second order difference equation in k_t. Hence, analyzing how the optimal capital stock moves over time amounts to analyzing how this difference equation behaves. As we have already shown, in this example there exists a unique stationary capital stock given by f′(k*) = 1/β, and it is possible to characterize how the economy moves from any given k_0 towards k*.
For a general dynamic programming problem (X, Γ, A, F, β), where
1. X is a convex subset of R^l,
2. β ∈ (0,1),
the Euler equation and the transversality condition are
F_y(x_t, x_{t+1}) + βF_x(x_{t+1}, x_{t+2}) = 0,   (62)
and
lim_{t→∞} β^t F_x(x_t, x_{t+1}) · x_t = 0.   (63)
Hence, any sequence of state variables {x_t} satisfying (62) and (63) is a solution for SP given x_0.
For the one-sector growth model
F(k, k′) = U(f(k) − k′),
therefore,
F_y(x_t, x_{t+1}) = U′(f(k_t) − k_{t+1})(−1)
and
F_x(x_{t+1}, x_{t+2}) = U′(f(k_{t+1}) − k_{t+2})f′(k_{t+1}),
which gives us (61). In order to be able to analyze equations like (61), we need to study how to solve difference equations.
10.1 Linear Difference Equations
Consider the linear difference equation of order q,
z_{t+1} = C + B_1 z_t + B_2 z_{t−1} + ⋯ + B_q z_{t−q+1},   (64)
where z_t ∈ R^l, C is a vector of constants, and each B_i is an (l × l) matrix. Stacking the last q values of z,
x_t = (z_t, z_{t−1}, …, z_{t−q+1})′,   (65)
we can write (64) as a first order system x_{t+1} = C̃ + Ãx_t, where Ã is an (lq × lq) companion matrix.   (66)
Consider then the first order equation z_{t+1} = C + Az_t. A steady state z̄ satisfies
z̄ = C + Az̄,
so that, subtracting,
z_{t+1} − z̄ = A(z_t − z̄) = Ax_t,
where x_t ≡ z_t − z̄ is the deviation from the steady state. The behavior of x_t is governed by the eigenvalues of A: a scalar λ and a vector e ≠ 0 such that
(A − λI)e = 0.
A nonzero solution e exists if and only if
det(A − λI) = 0.
The values of λ that satisfy this equation are called eigenvalues of A. For each eigenvalue λ_i, (A − λ_i I)e = 0 will then give the corresponding eigenvector e_i.
Definition 199 A square matrix A is diagonalizable if there exists an invertible matrix P such that P^{−1}AP is diagonal.
Theorem 200 Let A be an l × l matrix. If l eigenvectors of A are linearly independent, then it is diagonalizable, and B^{−1}AB = Λ, where B is the matrix of eigenvectors, and Λ is a diagonal matrix with the eigenvalues of A on the diagonal.
Theorem 201 If the l eigenvalues of A are all different, then its eigenvectors are linearly independent; hence A is diagonalizable.
10.1.1 Two-dimensional case
Let λ_1 and λ_2, with λ_1 ≠ λ_2, be the eigenvalues of A, so that A is diagonalizable as
Λ = B^{−1}AB,
where
B = [ b_11 b_12
      b_21 b_22 ],
with [b_11 b_21]′ and [b_12 b_22]′ the eigenvectors corresponding to λ_1 and λ_2, and
Λ = [ λ_1 0
      0  λ_2 ].
Then,
B^{−1}z_{t+1} = B^{−1}A(BB^{−1})z_t = (B^{−1}AB)(B^{−1}z_t) = ΛB^{−1}z_t.
Let ẑ_t = B^{−1}z_t. Then ẑ_{t+1} = Λẑ_t, or, writing ẑ_t = (x̂_t, ŷ_t)′,
x̂_{t+1} = λ_1 x̂_t and ŷ_{t+1} = λ_2 ŷ_t.
Then,
x̂_t = λ_1^t x̂_0 and ŷ_t = λ_2^t ŷ_0.
Transforming back, z_t = Bẑ_t, i.e.
[ x_t ]   [ b_11 b_12 ] [ x̂_t ]
[ y_t ] = [ b_21 b_22 ] [ ŷ_t ],
which implies
x_t = b_11 λ_1^t x̂_0 + b_12 λ_2^t ŷ_0,   (67)
and
y_t = b_21 λ_1^t x̂_0 + b_22 λ_2^t ŷ_0.   (68)
We can recover (x̂_0, ŷ_0) from ẑ_0 = B^{−1}z_0: writing
B^{−1} = [ b̃_11 b̃_12
           b̃_21 b̃_22 ],
we have
x̂_0 = b̃_11 x_0 + b̃_12 y_0 and ŷ_0 = b̃_21 x_0 + b̃_22 y_0.
Hence,
x_t = b_11 λ_1^t (b̃_11 x_0 + b̃_12 y_0) + b_12 λ_2^t (b̃_21 x_0 + b̃_22 y_0),   (69)
and
y_t = b_21 λ_1^t (b̃_11 x_0 + b̃_12 y_0) + b_22 λ_2^t (b̃_21 x_0 + b̃_22 y_0).   (70)
If |λ_1| < 1 and |λ_2| < 1, then x_t → 0 and y_t → 0.
If |λ_1| > 1 and |λ_2| > 1, then x_t → ±∞ and y_t → ±∞ (unless x̂_0 = ŷ_0 = 0).
If |λ_1| < 1 and |λ_2| > 1, the system explodes unless
b̃_21 x_0 + b̃_22 y_0 = 0.   (71)
Hence, there is a particular set of initial conditions that put the system into a stable path (usually referred to as a saddle path). In other words, the initial conditions y_0 and x_0 are not independent of each other.
In this case, i.e. if (71) holds, we have
x_t = b_11 λ_1^t (b̃_11 x_0 + b̃_12 y_0),
and
x_t = (b_11/b_21) y_t,   (72)
and we know exactly how x_t and y_t are related for all t > 0.
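A small numerical sketch of the saddle-path case (the coefficients of the second-order equation below are made up): z_{t+2} = 2.5z_{t+1} − z_t has roots 2 and 0.5, and an initial vector proportional to the stable eigenvector satisfies (71) and converges:

```python
import numpy as np

# z_{t+2} = 2.5*z_{t+1} - z_t stacked as x_{t+1} = A x_t with x_t = (z_{t+1}, z_t)'.
A = np.array([[2.5, -1.0],
              [1.0,  0.0]])
lam, B = np.linalg.eig(A)                 # eigenvalues 2.0 and 0.5: a saddle
unstable = int(np.argmax(np.abs(lam)))
Btil = np.linalg.inv(B)                   # rows of B^{-1} give the restriction (71)
x0 = np.array([0.5, 1.0])                 # eigenvector of the stable root 0.5
print(abs(Btil[unstable] @ x0))           # ~0: x0 satisfies the saddle-path condition
x = x0.copy()
for _ in range(50):
    x = A @ x
print(np.abs(x).max())                    # -> 0 along the saddle path
```

Any initial vector violating the restriction picks up a component along the root 2 and explodes.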
10.1.2 General Case
If A is diagonalizable, A = BΛB^{−1}, and we have
x_t = BΛ^t B^{−1} x_0.   (73)
Therefore,
B^{−1}x_0 = [ Σ_{j=1}^l b̃_{1j} x_{j,0}
              Σ_{j=1}^l b̃_{2j} x_{j,0}
              ⋮
              Σ_{j=1}^l b̃_{lj} x_{j,0} ],
and, for the kth component of x_t,
x_{k,t} = Σ_{i=1}^l Σ_{j=1}^l b_{ki} λ_i^t b̃_{ij} x_{j,0} = Σ_{i=1}^l b_{ki} λ_i^t ( Σ_{j=1}^l b̃_{ij} x_{j,0} ).   (74)
Note that this is exactly equations (69) and (70) we derived for the two dimensional case. We are interested in the behavior of x_t as t → ∞. From (74) we have the following immediate result.
Theorem 202 If all eigenvalues of A are less than 1 in absolute value, then x_t → 0 as t → ∞, since λ_i^t → 0 for all i.
Suppose not all of the eigenvalues are less than one in absolute value, but only m < l of them are. Without loss of generality, suppose that the first m of the eigenvalues are less than one in absolute value and the last l − m are greater than one in absolute value. Then it is obvious from (74) that we need additional restrictions on the initial values:
x_{k,t} = Σ_{i=1}^m b_{ki} λ_i^t ( Σ_{j=1}^l b̃_{ij} x_{j,0} ), provided Σ_{j=1}^l b̃_{ij} x_{j,0} = 0 for all i > m.
Note that the condition Σ_{j=1}^l b̃_{ij} x_{j,0} = 0 for all i > m is exactly equation (71) that we got for the two dimensional case.
This implies that for the stability of the system the initial values associated with the unstable eigenvalues must be pinned down by the others. To see this, note that the condition on the initial values associated with the unstable eigenvalues, Σ_{j=1}^l b̃_{ij} x_{j,0} = 0 for all i > m, implies that the l − m initial values associated with the unstable eigenvalues are functions of the m initial values associated with the stable eigenvalues. In other words, the initial values associated with the unstable eigenvalues should not be exogenous to the system.
Specifically,
Σ_{j=1}^m b̃_{ij} x_{j,0} + Σ_{j=m+1}^l b̃_{ij} x_{j,0} = 0 for all i > m,
or
[ b̃_{m+1,m+1} ⋯ b̃_{m+1,l} ] [ x_{m+1,0} ]       [ Σ_{j=1}^m b̃_{m+1,j} x_{j,0} ]
[ ⋮                        ] [ ⋮         ]  = −  [ ⋮                           ]
[ b̃_{l,m+1}   ⋯ b̃_{l,l}   ] [ x_{l,0}   ]       [ Σ_{j=1}^m b̃_{l,j} x_{j,0}   ],
where the matrix on the left is (l − m) × (l − m) and the vectors are (l − m) × 1. Hence, the initial values associated with unstable eigenvalues cannot be independent of the initial values associated with stable eigenvalues. The subspace formed by the initial values associated with stable eigenvalues is called the stable manifold of the system.
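A numerical sketch of this restriction (the eigenvector matrix and eigenvalues below are made up): with stable roots 0.5 and 0.8 and unstable root 2, the linear restriction pins down one initial value, and the resulting path decays:

```python
import numpy as np

# Build A = B diag(0.5, 0.8, 2.0) B^{-1} with a made-up eigenvector matrix B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = B @ np.diag([0.5, 0.8, 2.0]) @ np.linalg.inv(B)
Btil = np.linalg.inv(B)                       # its rows give the restrictions
x1, x2 = 1.0, -2.0                            # free initial values
# restriction: the row of B^{-1} for the unstable root must annihilate x_0
x3 = -(Btil[2, 0] * x1 + Btil[2, 1] * x2) / Btil[2, 2]
x = np.array([x1, x2, x3])
for _ in range(30):
    x = A @ x
print(np.abs(x).max())                        # decays: x_0 lies on the stable manifold
```

With x3 chosen any other way, the 2^t component dominates and the path explodes.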
10.2 The Quadratic Case
Suppose the return function is quadratic,
F(x, y) = F_0 + F_x′x + F_y′y + (1/2)x′F_xx x + (1/2)y′F_yy y + x′F_xy y,
where F_x and F_y are l × 1 vectors of constants, F_xx, F_yy, and F_xy are l × l matrices of constants, and F_xy′ is the transpose of F_xy. Note that the first derivative of a quadratic function of (x, y) can only have a constant, terms with x, or terms with y. Hence, for the quadratic case the Euler equations become
F_y + F_xy′ x_t + F_yy x_{t+1} + β{F_x + F_xx x_{t+1} + F_xy x_{t+2}} = 0.   (75)
A stationary point x̄ satisfies
[F_xy′ + F_yy + β(F_xx + F_xy)] x̄ = −(F_y + βF_x).   (76)
In terms of deviations z_t = x_t − x̄, if F_xy is non-singular the Euler equation can be written
z_{t+2} = Jz_{t+1} + Kz_t,   (77)
or, stacking,
[ z_{t+2} ]   [ J K ] [ z_{t+1} ]
[ z_{t+1} ] = [ I 0 ] [ z_t     ],
where
J = −(1/β) F_xy^{−1}(F_yy + βF_xx) and K = −(1/β) F_xy^{−1} F_xy′.
Let
Z_t = [ z_{t+1} ]  and  A = [ J K ]
      [ z_t     ]           [ I 0 ].   (78)
Consider an eigenvalue λ of A with eigenvector (x_1, x_2)′:
Jx_1 + Kx_2 = λx_1 and x_1 = λx_2.
If F_xy and (F_xy′ + F_yy + β(F_xx + F_xy)) are non-singular, one can show that A will be non-singular (left as an exercise). If A is non-singular, then λ ≠ 0. Then x_1 = λx_2, and the two equations reduce to
(K + λJ − λ²I)x_2 = 0,
implying that (K + λJ − λ²I) is singular. Hence,
det(K + λJ − λ²I) = 0,
and λ is a characteristic root of A if and only if it satisfies this equation. Using the values of K and J, this equation becomes
det( F_xy′ + λ(F_yy + βF_xx) + λ²βF_xy ) = 0.   (79)
Lemma If λ ≠ 0 is a root of (79), then so is 1/(βλ). To see this, evaluate (79) at 1/(βλ):
F_xy′ + (1/(βλ))(F_yy + βF_xx) + (1/(βλ))²βF_xy = (1/(βλ²)) [ F_xy + λ(F_yy + βF_xx) + λ²βF_xy′ ],
and the bracketed matrix is the transpose of the one in (79) (F_xx and F_yy are symmetric). Since (1/(βλ²)) is a constant and det(M′) = det(M),
(βλ²)^l det( F_xy′ + (1/(βλ))(F_yy + βF_xx) + (1/(βλ))²βF_xy ) = det( F_xy′ + λ(F_yy + βF_xx) + λ²βF_xy ) = 0,   (80)
so 1/(βλ) is a root of (79) whenever λ is.
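This can be seen in a scalar sketch (the coefficients below are made up): for F(x, y) = −(a/2)x² − (b/2)y² + cxy, the Euler equation (75) reduces to βc·x_{t+2} − (b + βa)x_{t+1} + c·x_t = 0, whose characteristic roots have product 1/β:

```python
import numpy as np

# Characteristic roots of beta*c*L^2 - (b + beta*a)*L + c for made-up a, b, c.
a, b, c, beta = 2.0, 3.0, 1.0, 0.95       # a > 0 and a*b - c**2 > 0: F strictly concave
roots = np.roots([beta * c, -(b + beta * a), c])
print(roots.prod(), 1 / beta)             # product of roots = 1/beta: reciprocal pairs
print(sorted(np.abs(roots)))              # one root inside, one outside the unit circle
```

The product of the roots of βcλ² − (b + βa)λ + c is c/(βc) = 1/β, so the roots pair up exactly as the Lemma states.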
The previous lemma implies that the roots of A come in "almost reciprocal" pairs
$(\lambda, 1/(\beta\lambda))$, so we can have at most l roots smaller than one in absolute value. Indeed, if exactly l of the
eigenvalues are smaller than one in absolute value, then we have the following
global stability result.
Theorem 205 Let $F:\mathbb{R}^{2l}\to\mathbb{R}$ be a strictly concave, quadratic function. Let
$\Gamma(x)=\mathbb{R}^l$ for all $x\in\mathbb{R}^l$, and $0<\beta<1$. Assume that $F_{xy}$ and $(F'_{xy}+F_{yy}+\beta F_{xx}+\beta F_{xy})$ are non-singular, and let $\bar x$ be the unique stationary point. Assume
$A$ has $l$ characteristic roots less than one in absolute value; then for all $x_0\in\mathbb{R}^l$ there
exists a unique solution $\{x_t\}$ to the optimization problem. This sequence satisfies
(75) and has $\lim_{t\to\infty}x_t=\bar x$.
To gain some intuition about this theorem, let $l=1$; then
\[
\begin{bmatrix}z_{t+2}\\ z_{t+1}\end{bmatrix}
=\begin{bmatrix}c & d\\ 1 & 0\end{bmatrix}
\begin{bmatrix}z_{t+1}\\ z_t\end{bmatrix}, \tag{81}
\]
where $c$ and $d$ are some constants. We know from our analysis of the two-dimensional
case that if one of the eigenvalues is less than one in absolute value and the other
one is greater than one in absolute value, then we need the following condition:
\[
\tilde b_{12}z_1+\tilde b_{22}z_0=0. \tag{82}
\]
Note that this restriction implies a relation between $z_1$ and $z_0$. The basic idea is
that for any $z_0$, $z_1$ can be chosen to satisfy this equation. This will be the case
if $\tilde b_{12}\neq 0$. Suppose not, i.e. $\tilde b_{12}=0$. Then both $(z_0=0,\,z_1=0)$ and $(z_0=0,\,z_1=\varepsilon)$ would satisfy this restriction, contradicting the fact that there is at
most one sequence that is optimal.
10.3
Linear Approximations
This theorem gives us a powerful tool for analyzing the stability of dynamic
programming problems with quadratic return functions. What happens when
the return function is not quadratic and the Euler equations are not linear? The
following theorem deals with this problem and develops a local stability result.
Theorem 206 Suppose the dynamic programming problem $(X,\Gamma,A,F,\beta)$ satisfies:

$0<\beta<1$;

$F$ is strictly concave, i.e.
\[
F(\theta(x,y)+(1-\theta)(x',y'))>\theta F(x,y)+(1-\theta)F(x',y');
\]

$\Gamma$ is convex, i.e. $y\in\Gamma(x)$ and $y'\in\Gamma(x')$ imply $\theta y+(1-\theta)y'\in\Gamma(\theta x+(1-\theta)x')$.

Let $\bar x$ be an interior stationary point, and linearize the Euler equations around $\bar x$ to obtain
\[
\begin{bmatrix}z_{t+2}\\ z_{t+1}\end{bmatrix}
=\underbrace{\begin{bmatrix}J & K\\ I & 0\end{bmatrix}}_{A}
\begin{bmatrix}z_{t+1}\\ z_t\end{bmatrix},
\]
where $z_t=x_t-\bar x$, $J=-\tfrac{1}{\beta}F_{xy}^{-1}(F_{yy}+\beta F_{xx})$, and $K=-\tfrac{1}{\beta}F_{xy}^{-1}F'_{xy}$, with the derivatives evaluated at $\bar x$. Suppose $A$ has $l$ characteristic roots less than one in absolute value; then there exists a neighborhood
$U$ of $\bar x$ such that if $x_0\in U$, then the optimal sequence $\{x_t\}$ satisfies $\lim_{t\to\infty}x_t=\bar x$.
Consider the standard one-sector growth model where $F(x,y)=U(f(x)-y)$.
The steady state $k^*$ is characterized by
\[
\beta f'(k^*)=1,
\]
and the Euler equation is given by
\[
-U'(f(k_t)-k_{t+1})+\beta f'(k_{t+1})U'(f(k_{t+1})-k_{t+2})=0.
\]
Linearizing this equation around $k^*$,
\[
-\frac{U''(\cdot)}{\beta}(k_t-k^*)
+\Big[\Big(1+\frac{1}{\beta}\Big)U''(\cdot)+\beta f''(k^*)U'(\cdot)\Big](k_{t+1}-k^*)
-U''(\cdot)(k_{t+2}-k^*)=0,
\]
where $U''(\cdot)$ and $U'(\cdot)$ indicate that the functions are evaluated at the steady state values.
Hence we have the second-order system given by
\[
(k_{t+2}-k^*)=\Big[\Big(1+\frac{1}{\beta}\Big)+\frac{f''(k^*)/f'(k^*)}{U''(\cdot)/U'(\cdot)}\Big](k_{t+1}-k^*)-\frac{1}{\beta}(k_t-k^*),
\]
where we use $\beta f''(k^*)=f''(k^*)/f'(k^*)$ at the steady state, or, in matrix form,
\[
\begin{bmatrix}k_{t+2}-k^*\\ k_{t+1}-k^*\end{bmatrix}
=A\begin{bmatrix}k_{t+1}-k^*\\ k_t-k^*\end{bmatrix}.
\]
Hence $A$ is the matrix which governs the behavior of this linearized system
around $k^*$.

We could also derive $A$ using the formulas from the previous theorem:
\[
F_x=U'(f(x)-y)f'(x),
\]
\[
F_{xx}=U''(f(x)-y)(f'(x))^2+U'(f(x)-y)f''(x),
\]
\[
F_y=-U'(f(x)-y),
\]
\[
F_{yy}=U''(f(x)-y),
\]
and
\[
F_{yx}=F_{xy}=-U''(f(x)-y)f'(x).
\]
Evaluating at the steady state (where $f'=1/\beta$),
\[
J=-\tfrac{1}{\beta}F_{xy}^{-1}(F_{yy}+\beta F_{xx})=\Big(1+\frac{1}{\beta}\Big)+\frac{f''/f'}{U''/U'},
\]
and
\[
K=-\tfrac{1}{\beta}F_{xy}^{-1}F'_{xy}=-\frac{1}{\beta},
\]
leading to
\[
A=\begin{bmatrix}\big(1+\frac{1}{\beta}\big)+\frac{f''/f'}{U''/U'} & -\frac{1}{\beta}\\[2pt] 1 & 0\end{bmatrix}.
\]
The characteristic roots of $A$ solve $R(\lambda)=\lambda^2-\mathrm{trace}(A)\lambda+\det(A)=0$. Note that
\[
R(0)=\det(A)=\frac{1}{\beta}>0,
\]
\[
R(1)=-\frac{f''/f'}{U''/U'}<0,
\]
and
\[
R(1/\beta)=-\frac{1}{\beta}\,\frac{f''/f'}{U''/U'}<0.
\]
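To see these conditions at work, the roots of $R(\lambda)$ can be computed for a parametric example. The sketch below is a pure-Python illustration; the CRRA utility, Cobb-Douglas technology with full depreciation (so that $k^*=(a\beta)^{1/(1-a)}$), and the parameter values are all assumptions made for this example:

```python
import math

# Illustrative parameters (assumptions for this example)
beta, sigma, a = 0.95, 2.0, 0.3

# Steady state with f(k) = k^a and full depreciation: beta*f'(k*) = 1
kstar = (a * beta) ** (1.0 / (1.0 - a))
cstar = kstar ** a - kstar

# Curvature ratios evaluated at the steady state
Upp_over_Up = -sigma / cstar        # U''/U' for CRRA utility
fpp_over_fp = (a - 1.0) / kstar     # f''/f' for Cobb-Douglas technology

# A = [[J, -1/beta], [1, 0]] with J = (1 + 1/beta) + (f''/f')/(U''/U')
J = (1.0 + 1.0 / beta) + fpp_over_fp / Upp_over_Up
trace_A, det_A = J, 1.0 / beta

# Roots of R(lam) = lam^2 - trace(A)*lam + det(A)
disc = math.sqrt(trace_A ** 2 - 4.0 * det_A)
lam1 = (trace_A - disc) / 2.0       # stable root, in (0, 1)
lam2 = (trace_A + disc) / 2.0       # unstable root, above 1/beta
```

For these parameter values the stable root is roughly 0.42 and the unstable root roughly 2.5; their product equals $\det(A)=1/\beta$, the almost-reciprocal-pair property.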
Since $R(0)>0$ while $R(1)$ and $R(1/\beta)$ are both negative, one root lies in $(0,1)$ and the other is greater than $1/\beta$. The stable solution therefore satisfies
\[
k_{t+1}-k^*=\lambda_1(k_t-k^*),
\]
where $\lambda_1$ is the eigenvalue that is less than 1 in absolute value. Again from our
analysis of the two-dimensional case, we know that this should correspond to
\[
k_{t+1}-k^*=-\frac{\tilde b_{11}}{\tilde b_{12}}(k_t-k^*),
\]
so that $-\tilde b_{11}/\tilde b_{12}=\lambda_1$.
where $\theta\in(0,1)$.

a) The first-order condition with respect to $k'$ and the envelope condition are obtained by differentiating the Bellman equation in the usual way; combining them yields the Euler equation for this problem.

b) Let $g(k^*)=k^*$; then the necessary condition for a stationary point pins down a unique candidate $k^*$. It is easy to verify that it is also sufficient.

c) Let's linearize the Euler equation around $k^*$. Collecting the terms in $(k_t-k^*)$, $(k_{t+1}-k^*)$, and $(k_{t+2}-k^*)$, and dividing through by the common nonzero factor multiplying $(k_{t+2}-k^*)$, the linearized Euler equation takes the form
\[
(k_{t+2}-k^*)+B(k_{t+1}-k^*)+\frac{1}{\beta}(k_t-k^*)=0,
\]
where $B$ collects the remaining derivatives evaluated at $k^*$. Hence,
\[
\begin{bmatrix}k_{t+2}-k^*\\ k_{t+1}-k^*\end{bmatrix}
=\underbrace{\begin{bmatrix}-B & -\frac{1}{\beta}\\ 1 & 0\end{bmatrix}}_{A}
\begin{bmatrix}k_{t+1}-k^*\\ k_t-k^*\end{bmatrix},
\]
and the behavior of the system depends on the roots of $A$. Note that $\lambda_1+\lambda_2=\mathrm{trace}(A)=-B$, and $\lambda_1\lambda_2=\det(A)=\frac{1}{\beta}$. You can easily check what
conditions we need for one of the roots to be greater than one and the other
less than one in absolute value.
11
Consider the stochastic case of the standard one-sector growth model. Output is
given by $f(k)z$, where $z$ is a stochastic technology shock. The Bellman equation
for this problem will be
\[
V(k,z)=\max_{0\le k'\le f(k)z}\ \{U[f(k)z-k']+\beta E[V(k',z')]\}, \tag{83}
\]
where $z'$ is the value of next period's shock, which is unknown at the current
period when $k'$ is chosen, and $E(\cdot)$ is the expected value operator. Hence, we
have two new features in the stochastic case: First, the state of the problem
now consists of both the current capital stock $k$ and the current shock $z$. Therefore,
in order to be able to analyze how the solution of this problem behaves over
time, we need to keep track of both $k$ and $z$. Second, we need to characterize
what we mean by the expression $E(\cdot)$.
11.1
Preliminaries
Suppose $z$ can only take values from a finite set $Z=\{z_1,z_2,\dots,z_n\}$. In this
case a probability distribution over $Z$ is simply an assignment of probabilities
$(\pi_1,\pi_2,\dots,\pi_n)$ to each element of $Z$. Since the $\pi_i$'s are probabilities,
\[
\pi_i=\Pr(z=z_i),\qquad \pi_i\ge 0\ \forall i,\quad\text{and}\quad \sum_{i=1}^{n}\pi_i=1.
\]
If the shocks are drawn independently each period according to $(\pi_1,\dots,\pi_n)$, (83) becomes
\[
V(k,z)=\max_{0\le k'\le f(k)z}\ \Big\{U[f(k)z-k']+\beta\sum_{i=1}^{n}V(k',z_i)\,\pi_i\Big\}. \tag{84}
\]
Definition 208 A function $\mu(\cdot)$ defines a probability measure over a given family $\mathcal{Z}$ of subsets of $Z$ that includes $\emptyset$ and $Z$ if it satisfies:
i) $\mu(\emptyset)=0$, $\mu(Z)=1$, and $\mu(A)\ge 0$ for all $A\in\mathcal{Z}$;
ii) for any countable collection of disjoint sets $A_i\in\mathcal{Z}$, $\mu(\cup_i A_i)=\sum_i\mu(A_i)$.
How large can we pick the family of subsets of $Z$ and still define a probability
measure over it? Obviously, when $Z$ is finite, we can take $\mathcal{Z}$ to be any family
of subsets of $Z$, including the set of all subsets of $Z$, and define a probability
measure. If $Z$ is not finite, then we cannot in general define a probability measure on all
subsets of $Z$ that has the obvious adding-up property for disjoint unions. What
properties must a family of subsets have so that we can define a meaningful
probability measure over it?
Definition 209 Let $S$ be a set and $\mathcal{S}$ be a family of subsets of $S$. $\mathcal{S}$ is called a
$\sigma$-algebra if
a) $\emptyset\in\mathcal{S}$ and $S\in\mathcal{S}$;
b) $A\in\mathcal{S}\Rightarrow A^c=S\setminus A\in\mathcal{S}$;
c) $A_n\in\mathcal{S}$, $n=1,2,\dots\Rightarrow\cup_{n=1}^{\infty}A_n\in\mathcal{S}$.
Hence the main properties of a $\sigma$-algebra are closure under complementation
and countable union.

Definition 210 $(S,\mathcal{S})$, where $\mathcal{S}$ is a $\sigma$-algebra, is called a measurable space, and
any $A\in\mathcal{S}$ is called a measurable set.

Definition 211 $(S,\mathcal{S},\mu)$, where $\mathcal{S}$ is a $\sigma$-algebra and $\mu$ is a probability measure
over $\mathcal{S}$, is called a probability space.

If $S$ is a finite set, the family of all subsets of $S$ obviously constitutes a $\sigma$-algebra. The family of all subsets of a finite set $S$ is called the complete $\sigma$-algebra
for $S$, and it is the $\sigma$-algebra routinely used for finite sets.

Definition 212 Given a measurable space $(S,\mathcal{S})$, a real-valued function $f:S\to\mathbb{R}$ is a measurable function w.r.t. $\mathcal{S}$ if
\[
\{s\in S: f(s)\le a\}\in\mathcal{S}\ \text{for all } a\in\mathbb{R}.
\]
11.2
Transition Functions
We would also like to analyze the problem where the expected value of the
next period's shock depends on the current value of the shock. An autoregressive
process over the shocks will be such a case. Then a general form of (83) will be
\[
V(x,z)=\max_{y\in\Gamma(x,z)}\ \{F(x,y,z)+\beta E[V(y,z')\mid z]\}, \tag{85}
\]
where $E[V(y,z')\mid z]$ denotes the fact that the expected value of $V(y,z')$ depends
on $z$. Suppose $Z$ is a finite set. Then instead of assigning unconditional
probabilities to each element of $Z$, we need to define conditional probabilities,
or transition probabilities. Let $\pi_{ij}$ denote the probability that next period's
shock will be $z_j$ given that today's shock is $z_i$. Then (85) becomes
\[
V(x,z_i)=\max_{y\in\Gamma(x,z_i)}\ \Big\{F(x,y,z_i)+\beta\sum_{j=1}^{n}V(y,z_j)\,\pi_{ij}\Big\}. \tag{86}
\]
We can collect these transition probabilities in a transition matrix
\[
\Pi=[\pi_{ij}]. \tag{87}
\]
Each row of a transition matrix must sum to one. Note that each row of
a transition matrix defines a probability measure over all subsets of $Z$. A
transition matrix naturally defines a transition function.
Definition 214 Let $Z$ be a finite set and $\mathcal{Z}$ be the complete $\sigma$-algebra of $Z$. A
transition function $Q:Z\times\mathcal{Z}\to[0,1]$ satisfies:
i) for all $z\in Z$, $Q(z,\cdot)$ is a probability measure;
ii) for all $A\in\mathcal{Z}$, $Q(\cdot,A)$ is a measurable function.
Note that for every combination of a current shock and a subset of next period's
shocks, $Q$ assigns a probability. Hence, a transition function does two things:
First, for any given value of the current shock $z$, it defines a probability measure
over next period's shocks (each row of the matrix $\Pi$ above). Second, for any given
subset $A$ of next period's shocks, it gives, for each current shock, the probability
of moving into that subset.

Using a transition matrix we can define two important operators:
Definition 215 For any $\mathcal{Z}$-measurable function $f$, the Markov operator $T$ over
$f$ is given by
\[
(Tf)(z_i)=\sum_j f(z_j)\,Q(z_i,z_j),\quad\text{all } z_i\in Z.
\]

Definition 216 For any probability measure $\mu$ over $(Z,\mathcal{Z})$, the adjoint of the Markov
operator $T$ over $\mu$ is given by
\[
(T^*\mu)(A)=\sum_i Q(z_i,A)\,\mu(z_i),\quad\text{all } A\in\mathcal{Z}.
\]
Note that $(Tf)(z_i)$ is the expected value of $f$ next period if the current shock
is $z_i$, and $(T^*\mu)(A)$ is the probability that next period's shock lies in the set $A$ if
the current state is drawn according to the probability measure $\mu$. Hence, $T^*\mu$
defines a probability measure over next period if $\mu$ is the probability measure
over the current period.
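When $Z$ is finite, both operators are simple weighted sums. A minimal sketch in Python (the two-state transition matrix Q below is an assumed example, not taken from the notes):

```python
Q = [[0.9, 0.1],
     [0.4, 0.6]]   # Q[i][j] = Pr(z' = z_j | z = z_i); rows sum to one

def T(f):
    """Markov operator: (Tf)(z_i) = sum_j f(z_j) Q(z_i, z_j)."""
    return [sum(f[j] * Q[i][j] for j in range(len(Q))) for i in range(len(Q))]

def T_star(mu):
    """Adjoint on singletons: (T*mu)(z_j) = sum_i Q(z_i, z_j) mu(z_i)."""
    return [sum(Q[i][j] * mu[i] for i in range(len(Q))) for j in range(len(Q))]

f = [1.0, 0.0]          # indicator function of state 1
mu = [0.5, 0.5]         # uniform distribution over current states

Tf = T(f)               # expected value of the indicator, state by state
mu_next = T_star(mu)    # distribution over next period's states
```

Here Tf[i] is the conditional probability of being in state 1 next period, and mu_next sums to one, as a probability measure must.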
11.3

With a finite shock set, the Bellman equation is
\[
V(x,z_i)=\max_{y\in\Gamma(x,z_i)}\ \Big\{F(x,y,z_i)+\beta\sum_{j=1}^{n}V(y,z_j)\,\pi_{ij}\Big\}, \tag{88}
\]
with associated policy function $y=g(x,z_i)$. The optimal policy, together with the transition function for the shocks, defines a transition function over the state $(x,z)$:
\[
P[(x,z_k),\,A\times B]=\begin{cases} Q(z_k,B) & \text{if } g(x,z_k)\in A,\\ 0 & \text{if } g(x,z_k)\notin A,\end{cases}
\]
where $x\in X$, $z_k\in Z$, $A\in\mathcal{X}$, and $B\in\mathcal{Z}$.
Suppose $X$ is also a finite set. Then $\mathcal{X}$ is the complete $\sigma$-algebra for $X$.
Given the current state $(x,z_k)$, $P[(x,z_k),A\times B]$ gives the probability that
next period's state will be in the set $A\times B$. This will happen with probability
$Q(z_k,B)$ if the optimal choice $x'$ is in the set $A$.
Example 217 Consider the following dynamic programming problem from Greenwood, Hercowitz, and Huffman (AER, 1988):
\[
V(k_t,\varepsilon_t)=\max_{c_t,k_{t+1},l_t,h_t}\ \Big\{U(c_t,l_t)+\beta\int V(k_{t+1},\varepsilon_{t+1})\,dQ(\varepsilon_{t+1}\mid\varepsilon_t)\Big\} \tag{89}
\]
s.t.
\[
c_t=F(k_t h_t,l_t)-\frac{k_{t+1}-(1-\delta(h_t))k_t}{1+\varepsilon_t},
\]
where $h_t$ is the utilization rate of capital and $\varepsilon_t$ is a shock to the price of new capital goods. Suppose $\varepsilon$ can take two values, $\varepsilon_r$ and $\varepsilon_s$, with transition probabilities
\[
\pi_{rs}=\Pr[\varepsilon'=\varepsilon_s\mid\varepsilon=\varepsilon_r],\qquad \pi_{ij}\ge 0,\quad \pi_{rr}+\pi_{rs}=1,\quad \pi_{sr}+\pi_{ss}=1.
\]
Then (89) becomes
\[
V(k_i,\varepsilon_r)=\max_{c,k',l,h}\ \Big\{U(c,l)+\beta\sum_{s=1}^{2}\pi_{rs}V(k',\varepsilon_s)\Big\} \tag{90}
\]
s.t.
\[
c=F(k_i h,l)-\frac{k'-(1-\delta(h))k_i}{1+\varepsilon_r}.
\]
Using these probabilities we can define a transition matrix $P$ over the state space
of this problem. Note that the state is given by
\[
S=\{(k_1,\varepsilon_r),(k_1,\varepsilon_s),\dots,(k_N,\varepsilon_r),(k_N,\varepsilon_s)\}\subset K\times E,
\]
which has $2N$ elements.
12
Markov Chains
Let $\Pi=[\pi_{ij}]$ be an $l\times l$ transition matrix, with
\[
\pi_{ij}\ge 0\quad\text{and}\quad \sum_{j=1}^{l}\pi_{ij}=1.
\]
Suppose $p$ is the probability distribution over the current state (a $1\times l$ row vector); then the
distribution over next period's state is given by
\[
\hat p=p\Pi=\begin{bmatrix}p_1 & p_2 & \cdots & p_l\end{bmatrix}
\begin{bmatrix}\pi_{11} & \pi_{12} & \cdots & \pi_{1l}\\
\pi_{21} & \pi_{22} & \cdots & \pi_{2l}\\
\vdots & & & \vdots\\
\pi_{l1} & \pi_{l2} & \cdots & \pi_{ll}\end{bmatrix}
=\begin{bmatrix}\sum_{i=1}^{l}p_i\pi_{i1} & \sum_{i=1}^{l}p_i\pi_{i2} & \cdots & \sum_{i=1}^{l}p_i\pi_{il}\end{bmatrix}.
\]
Note that $\hat p\in\Delta^l$, since $\sum_j\hat p_j=\sum_j\sum_i p_i\pi_{ij}=\sum_i p_i\sum_j\pi_{ij}=1$. Similarly, if $p$ is the
probability distribution over the current state, then the probability distribution
two periods ahead is given by $(p\Pi)\Pi=p(\Pi\Pi)=p\Pi^2$.
As we did in the deterministic case, we would like to know what happens
to the state of the problem as we follow the optimal policy from any given initial
value. In the deterministic case the optimal policy rule $g(x)$ provided us with
all the information we need about the behavior of the model.

Suppose we are given some initial probability distribution $p_0$ over $Z$. The
long-run behavior of the state is given by $p_0\Pi^n$. On the other hand, if the
initial state is $z_i$, the probability distribution over states $n$ periods ahead will be
simply the $i$th row of $\Pi^n$. The steady state for this problem will obviously be a
stationary probability distribution over $Z$.
Definition 218 A set $E\subseteq Z$ is called an ergodic set if $Q(z_i,E)=1$ for all
$z_i\in E$, and if no proper subset of $E$ has this property.

Hence, if the current state is in an ergodic set, then with probability one next
period's state will also be in this ergodic set. It is obvious that there will be a close
link between ergodic sets and stationary distributions.

Definition 219 An invariant distribution $p^*$ over $Z$ is a probability distribution
such that $p^*=p^*\Pi$.
Our main concern is with the conditions under which $p_0\Pi^n\to p^*$ as $n\to\infty$.

12.1 Examples

First, let
\[
\Pi=\begin{bmatrix}3/4 & 1/4\\ 1/4 & 3/4\end{bmatrix}.
\]
Then
\[
\lim_{n\to\infty}\Pi^n=Q=\begin{bmatrix}1/2 & 1/2\\ 1/2 & 1/2\end{bmatrix},
\]
and $p_0\Pi^n$ converges to the unique invariant distribution $(1/2,1/2)$ for any initial distribution $p_0$.
Next, consider
\[
\Pi=\begin{bmatrix}0 & \Pi_1\\ \Pi_2 & 0\end{bmatrix},
\]
where $\Pi_1$ and $\Pi_2$ are $k\times(l-k)$ and $(l-k)\times k$ Markov matrices. Then
\[
\Pi^{2n}=\begin{bmatrix}(\Pi_1\Pi_2)^n & 0\\ 0 & (\Pi_2\Pi_1)^n\end{bmatrix}
\quad\text{and}\quad
\Pi^{2n+1}=\begin{bmatrix}0 & (\Pi_1\Pi_2)^n\Pi_1\\ (\Pi_2\Pi_1)^n\Pi_2 & 0\end{bmatrix}.
\]
In this example there is only one ergodic set, $Z$ itself. But the ergodic set has
cyclically moving subsets: if the system begins in $C_1=\{z_1,z_2,\dots,z_k\}\subset Z$, then
after any even number of periods it will be back in the set $C_1$, and after any
odd number of periods it will be in $C_2=Z\setminus C_1$. For example, let $l=4$, $k=2$, and
\[
\Pi_1=\Pi_2=Q=\begin{bmatrix}1/2 & 1/2\\ 1/2 & 1/2\end{bmatrix}.
\]
Then
\[
\Pi^{2n}=\begin{bmatrix}Q & 0\\ 0 & Q\end{bmatrix}
\quad\text{and}\quad
\Pi^{2n+1}=\begin{bmatrix}0 & Q\\ Q & 0\end{bmatrix},
\]
so $\Pi^n$ itself does not converge, although its time average does, and
\[
p^*=\begin{bmatrix}1/4 & 1/4 & 1/4 & 1/4\end{bmatrix}
\]
is the unique invariant distribution.
Finally, let
\[
\Pi=\begin{bmatrix}\Pi_1 & 0\\ 0 & \Pi_2\end{bmatrix},
\quad\text{so that}\quad
\Pi^n=\begin{bmatrix}\Pi_1^n & 0\\ 0 & \Pi_2^n\end{bmatrix},
\]
which converges whenever $\Pi_1^n$ and $\Pi_2^n$ converge. Let
\[
\Pi_1=\Pi_2=\begin{bmatrix}3/4 & 1/4\\ 1/4 & 3/4\end{bmatrix}.
\]
Then
\[
\lim_{n\to\infty}\Pi^n=\begin{bmatrix}1/2 & 1/2 & 0 & 0\\ 1/2 & 1/2 & 0 & 0\\ 0 & 0 & 1/2 & 1/2\\ 0 & 0 & 1/2 & 1/2\end{bmatrix}.
\]
In this case there are two invariant distributions, $p_1=(1/2,1/2,0,0)$ and $p_2=(0,0,1/2,1/2)$. Note also that any convex combination of $p_1$ and $p_2$ is also an
invariant distribution. Contrary to the previous examples, where the long-run
distribution was independent of the initial conditions, here the limit $p_0\lim_n\Pi^n$
depends on the initial distribution $p_0$.
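The limiting behavior in these examples can be checked numerically with a small matrix-power routine; the sketch below reproduces all three cases in pure Python:

```python
def matmul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def power(P, n):
    """Compute P^n by repeated multiplication (n >= 1)."""
    out = P
    for _ in range(n - 1):
        out = matmul(out, P)
    return out

# Case 1: Pi^n converges to a matrix with identical rows (1/2, 1/2).
P1 = [[0.75, 0.25], [0.25, 0.75]]
lim1 = power(P1, 200)

# Case 2: the cycling chain; even and odd powers differ, so Pi^n
# itself does not converge.
P2 = [[0, 0, 0.5, 0.5],
      [0, 0, 0.5, 0.5],
      [0.5, 0.5, 0, 0],
      [0.5, 0.5, 0, 0]]
even, odd = power(P2, 200), power(P2, 201)

# Case 3: the block-diagonal chain; the limit exists but depends on
# which block (ergodic set) the chain starts in.
P3 = [[0.5, 0.5, 0, 0],
      [0.5, 0.5, 0, 0],
      [0, 0, 0.5, 0.5],
      [0, 0, 0.5, 0.5]]
lim3 = power(P3, 200)
```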
12.2
Invariant Distributions
Let $\Delta^l$ be the $l$-dimensional unit simplex. We have already shown that the transition
matrix $\Pi$ maps $\Delta^l$ into itself. Hence, if we can show that $\Delta^l$ is a complete metric
space with some appropriate metric and that $\Pi$ defines a contraction mapping on $\Delta^l$,
then we can use the contraction mapping theorem to show that $\Pi$ has a unique
fixed point in $\Delta^l$.
Let $\|\cdot\|$ denote the norm on $\mathbb{R}^l$ defined by
\[
\|x\|=\sum_{i=1}^{l}|x_i|.
\]
Define $\varepsilon_j=\min_i\pi_{ij}$ and $\varepsilon=\sum_{j=1}^{l}\varepsilon_j\le 1$. Then, for any $p,q\in\Delta^l$,
\[
\|p\Pi-q\Pi\|=\sum_{j=1}^{l}\Big|\sum_{i=1}^{l}(p_i-q_i)\pi_{ij}\Big|
=\sum_{j=1}^{l}\Big|\sum_{i=1}^{l}(p_i-q_i)(\pi_{ij}-\varepsilon_j)+\sum_{i=1}^{l}(p_i-q_i)\varepsilon_j\Big|
\]
\[
\le\sum_{j=1}^{l}\sum_{i=1}^{l}|p_i-q_i|(\pi_{ij}-\varepsilon_j)+\sum_{j=1}^{l}\varepsilon_j\Big|\sum_{i=1}^{l}(p_i-q_i)\Big|
=\sum_{i=1}^{l}|p_i-q_i|\sum_{j=1}^{l}(\pi_{ij}-\varepsilon_j)+0
=(1-\varepsilon)\|p-q\|,
\]
where the second line uses $\sum_i(p_i-q_i)=0$. Hence, if $\varepsilon>0$, $\Pi$ is a contraction of
modulus $(1-\varepsilon)$ on $\Delta^l$, and it has a unique fixed point $p^*\in\Delta^l$. More generally, if
$\varepsilon^{(N)}>0$ for $\Pi^N$ for some $N\ge 1$, then
\[
\|p_0\Pi^{nN}-p^*\|\le(1-\varepsilon^{(N)})^n\|p_0-p^*\|,
\]
so that $\{p_0\Pi^n\}\to p^*$ for all $p_0\in\Delta^l$. In this case
\[
\{\Pi^n\}\to\begin{bmatrix}p_1^* & p_2^* & \cdots & p_l^*\\
p_1^* & p_2^* & \cdots & p_l^*\\
\vdots & & & \vdots\\
p_1^* & p_2^* & \cdots & p_l^*\end{bmatrix},
\]
where $p_i^*$ is the unique probability of being in state $i$ in the long run. Conversely,
if $\Pi^n$ converges to such a matrix, then for some $N$ sufficiently large there is at least
one column $j$ for which $\pi_{ij}^{(N)}>0$ for all $i$ (otherwise $p^*$ would not define a
probability distribution over $Z$), and then $\varepsilon^{(N)}\ge\varepsilon_j^{(N)}>0$.
13
Recall that if
\[
V(x_0)=\sup_{\{x_{t+1}\}_{t=0}^{\infty}}\ \sum_{t=0}^{\infty}\beta^t F(x_t,x_{t+1}),
\]
then this function should satisfy the following functional equation (FE):
\[
V(x)=\sup_y\ [F(x,y)+\beta V(y)], \tag{FE}
\]
subject to
\[
y\in\Gamma(x).
\]
The supremum function $V(x_0)$ tells us the infinite discounted value of following the best sequence $\{x_{t+1}\}_{t=0}^{\infty}$. Our strategy was that, rather than finding
the best sequence $\{x_{t+1}\}_{t=0}^{\infty}$, we can try to find the function $V$ as a solution to the FE and use the associated policy rule $y=g(x)$ to analyze the
optimal sequence. We then showed that: 1) The supremum function $V$ has to
satisfy the FE, and if there is a solution to the FE, then it is the supremum function.
2) There is indeed a solution $V$ to the FE. We also analyzed: 1) the properties of $V$; 2)
the dynamic behavior implied by the policy function $y=g(x)$.
Our analysis so far was based on a planner's (or a representative agent's)
problem. Now we will talk about market economies. We will imagine a large
number of representative agents interacting in a market economy and try to
understand what conditions have to be satisfied for this market economy to be
in equilibrium. Once we move to a market economy we have to be clear about:

Ownership structure (who owns capital, and hence who decides on capital accumulation).

Which markets are open (goods, capital, labor).

Economic agents (households and firms).

In general a market equilibrium will be a situation such that, for some given
prices, individuals' and firms' decisions are such that markets clear. There is more
than one way to think about the markets. We have already seen Arrow-Debreu
economies with time-zero trade (i.e. all agents participating in a big market at
time 0), and economies with sequential markets where assets are traded every
period. We will now introduce a third equilibrium concept, called the recursive
competitive equilibrium, that is most suitable for the analysis of dynamic economies.
Recursive competitive equilibrium is based on the idea that dynamic programming problems can be split into decisions about today and the entire future.
As you remember, the key in our dynamic programming problem was the idea
of the state, i.e. the variables that provide all the information we need to make
decisions. In a recursive competitive equilibrium the prices are defined as functions of the state. Hence, in a recursive competitive equilibrium both individual
decisions (characterized by a value function and a decision rule) and the prices
will be functions of the state.
We will now define the recursive competitive equilibrium for our one-sector
growth model. We will imagine an economy that is populated by a large number
of identical agents (households). The households own both capital and labor.
They rent their capital and labor to a single firm that produces output with a
constant returns to scale technology and pays the rental rate and the wage rate
to the households. Households decide how much of their total resources (which
consist of undepreciated capital, rental income and wage income) to consume
and how much to save as future capital.

We will denote an individual's capital holdings by $k$, and the aggregate stock
of capital by $K$. Suppose each household has one unit of time and starts with $k_0$
units of capital. Let $r_t$ and $w_t$ be the period-$t$ rental rate and wage rate (we
do not know yet how they are determined). The households want to maximize
their lifetime utility by choosing the optimal consumption/saving path given a
set of prices. Hence, the agent's problem is
\[
\max_{c_t,k_{t+1}}\ \sum_t\beta^t u(c_t), \tag{HHP}
\]
s.t.
\[
c_t+k_{t+1}=w_t+(1+r_t-\delta)k_t=w_t+(1-\delta)k_t+r_tk_t,
\]
and
\[
k_0>0\ \text{given}.
\]
Note that since agents do not value leisure, they will supply all of their time to
the firm.
The firm faces a simple profit maximization problem each period, given by
\[
\max_{K_t,N_t}\ (F(K_t,N_t)-r_tK_t-w_tN_t)\ \text{for all } t. \tag{FP}
\]
The first order conditions associated with the firm's maximization problem are
\[
r_t=F_K(K_t,N_t),
\]
and
\[
w_t=F_N(K_t,N_t),
\]
where $K_t$ and $N_t$ are the aggregate capital stock and labor demanded by the firm.

Note that in equilibrium it must be the case that $N_t=1$. We also know
that in equilibrium the households will supply all of their capital stock to the
firm, so the capital rented by the firm equals the aggregate stock $K_t$; hence
\[
r_t=F_K(K_t,1),
\]
and
\[
w_t=F_N(K_t,1).
\]
These FOCs define, for every value of the aggregate capital stock, a rental rate and
a wage rate. We will therefore define a rental rate function $r:K\to\mathbb{R}_+$ and a
wage function $w:K\to\mathbb{R}_+$ as
\[
r=r(K)\ \text{and}\ w=w(K).
\]
Therefore, in each period, if each household knows the aggregate capital stock
$K$, then it knows exactly what the current rental rate and wage rate are.
Hence, a household needs to know both $k$ and $K$ to be able to solve its dynamic
programming problem. Furthermore, each household also needs to know how
$K$ evolves over time. The aggregate capital stock, however, evolves over time as
a result of the decisions of all of the households. This implies that we will need
a consistency condition.

Let's first look at the household problem. Let $V(k,K)$ be the value function
for a household with $k$ units of capital when the aggregate capital stock is $K$. This
value function is defined as
\[
V(k,K)=\max_{c,k'}\ [u(c)+\beta V(k',K')], \tag{HHP}
\]
subject to
\[
c+k'=r(K)k+(1-\delta)k+w(K),
\]
and
\[
K'=G(K).
\]
Note that the solution to this problem will imply a law of motion for individual
capital, given by
\[
k'=g(k,K)=\arg\max(HHP).
\]
Here, $G(K)$ is the law of motion for the aggregate capital stock. The household
needs to know $G$ in order to be able to predict $K'$. Of course, in equilibrium $G$ is
not an arbitrary object.

Now we can define a recursive competitive equilibrium (RCE): A RCE is a
set of functions for quantities, $G(K)$ and $g(k,K)$, for the utility level, $V(k,K)$, and
for prices, $r(K)$ and $w(K)$, such that:

$V(k,K)$ solves (HHP) and $g(k,K)$ is the associated policy function.

Prices are competitive, i.e.
\[
r(K)=F_K(K,1),
\]
and
\[
w(K)=F_N(K,1).
\]

Individual and aggregate decisions are consistent, i.e.
\[
G(K)=g(K,K)\ \text{for all } K.
\]

Note the following:

1. The third condition is the key feature of a RCE. It requires that whenever the individual consumer is endowed with the aggregate capital stock, his
individual behavior is exactly the same as the aggregate behavior.
2. We did not mention the price of the aggregate output. Other prices are
in terms of output.

3. We did not mention a market clearing condition such as
\[
C+K'=F(K,1)+(1-\delta)K.
\]
This is implied by the household budget constraint evaluated at $k=K$, together with the firm's FOCs and the CRS property of $F$, since
\[
F(K,1)+(1-\delta)K=r(K)K+(1-\delta)K+w(K).
\]
where $c(\cdot)$ and $h(\cdot)$ are infinite sequences of consumption and leisure, $U$ is continuously differentiable in both arguments, $U_1>0$, $U_2>0$, and $U$ is strictly
concave.

Let $K$ and $H$ denote the aggregate capital stock and labor supply. Firms
have access to the following CRS production technology $F(K_t,H_t):\mathbb{R}_+^2\to\mathbb{R}_+$,
with $F_1>0$, $F_2>0$, $F(0,0)=0$, and $F$ concave in $K$ and $H$ separately.
Moreover,
\[
Y_t=e^{z_t}F(K_t,H_t)\quad\text{with}\quad z_{t+1}=\rho z_t+\varepsilon_{t+1},\quad \varepsilon\sim N(0,\sigma_\varepsilon^2).
\]
Capital accumulates according to
\[
K_{t+1}=(1-\delta)K_t+X_t,
\]
and the aggregate resource constraint is
\[
C_t+K_{t+1}=(1-\delta)K_t+Y_t.
\]
The firm's problem is
\[
\max_{K_t,H_t}\ (e^{z_t}F(K_t,H_t)-r_tK_t-w_tH_t)\ \text{for all } t,
\]
where $r_t$ is the rental cost of capital and $w_t$ is the wage rate. The first order
conditions for this problem are given by
\[
r_t=e^{z_t}F_K(K_t,H_t),\quad\text{and}\quad w_t=e^{z_t}F_H(K_t,H_t).
\]
The representative household's problem in this economy is given by the following
Bellman equation:
\[
V(z,k,K)=\max_{c,x,h}\ [U(c,1-h)+\beta E(V(z',k',K')\mid z)]
\]
\[
\text{s.t.}\quad c+x\le r(z,K)k+w(z,K)h,
\]
\[
k'=(1-\delta)k+x,
\]
\[
K'=(1-\delta)K+X(z,K),
\]
\[
z'=\rho z+\varepsilon',\quad c\ge 0,\ 0\le h\le 1,
\]
or, equivalently,
\[
V(z,k,K)=\max_{c,k',h}\ [U(c,1-h)+\beta E(V(z',k',K')\mid z)]
\]
\[
\text{s.t.}\quad c+k'\le r(z,K)k+w(z,K)h+(1-\delta)k,
\]
\[
K'=G(z,K),\qquad z'=\rho z+\varepsilon',
\]
where $c$ and $k$ are individual consumption and capital stock, and $K$ and $X$ are the aggregate
capital stock and investment. Note that $r(z,K)$ and $w(z,K)$ indicate the fact
that these prices depend on the aggregate state.

REMARK: In the first formulation the household is given the pricing functions $w$ and $r$ as well as an aggregate investment function $X$. This allows the consumer to figure out current income as well as the future aggregate capital stock
$K'$. In the second formulation, the household is given a function $G$ that maps
$(z,K)$ into $K'$ directly.

Then a RCE for this economy is a value function $V(z,k,K)$, a set of decision rules $c(z,k,K)$, $h(z,k,K)$, and $x(z,k,K)$, corresponding aggregate decision rules $C(z,K)$, $H(z,K)$, and $X(z,K)$, and factor prices $w(z,K)$ and $r(z,K)$
such that:

$V(z,k,K)$ solves the household problem, with associated solutions given by
$c(z,k,K)$, $h(z,k,K)$, and $x(z,k,K)$.

Firms' FOCs are satisfied, i.e.
\[
r(z,K)=e^zF_K(K,H(z,K)),\quad\text{and}\quad w(z,K)=e^zF_H(K,H(z,K)).
\]

Individual and aggregate decisions are consistent, i.e. $C(z,K)=c(z,K,K)$, $H(z,K)=h(z,K,K)$, and $X(z,K)=x(z,K,K)$ for all $(z,K)$.
14
14.1
Consider the problem
\[
\max\ \sum_{t=0}^{\infty}\beta^t U(c_t) \tag{P1}
\]
subject to
\[
c_t+k_{t+1}=f(k_t)\ \text{and}\ k_0>0\ \text{given}.
\]
The DP problem associated with (P1) is given by
\[
V(k_t)=\max_{k_{t+1}}\ \{U(f(k_t)-k_{t+1})+\beta V(k_{t+1})\}. \tag{P2}
\]
First order and envelope conditions for this problem are given by
\[
U'(f(k_t)-k_{t+1})=\beta V'(k_{t+1}), \tag{FOC}
\]
and
\[
V'(k_t)=U'(f(k_t)-k_{t+1})f'(k_t). \tag{Envelope}
\]
Suppose the utility function and the production function take the following
parametric forms:
\[
U(c)=\frac{c^{1-\sigma}}{1-\sigma},
\]
and
\[
f(k)=k^a.
\]
Then the steady state level of the capital stock, $k^*=k_t=k_{t+1}$, is given by the
solution to the following Euler equation:
\[
U'(k^{*a}-k^*)=\beta U'(k^{*a}-k^*)\,a k^{*a-1},
\]
which gives us
\[
k^*=(a\beta)^{1/(1-a)}.
\]
Hence, given a set of parameters, we can find the steady state value of the capital
stock.
Remark 227 We know that given any value of $k_0>0$, this economy converges
monotonically to $k^*$. Hence, when we choose a grid for $k$ below, we should make
sure that $k^*$ is within that grid.
In order to be able to solve (P2) numerically, first assume that the capital
stock can only take values in a discrete set given by
\[
k_t\in K=\{k_1,k_2,\dots,k_N\},\ \forall t.
\]
This will make maximization over next period's capital stock $k'$ much
easier, since we only need to search over a finite number of possibilities.
Given this discrete set, we can define the following iteration on $K$: for $i=1,\dots,N$,
\[
V^n(k_i)=\max_{k'\in K,\ 0\le k'\le f(k_i)}\ \{U(f(k_i)-k')+\beta V^{n-1}(k')\}, \tag{P3}
\]
and
\[
g^n(k_i)=\arg\max_{k'\in K,\ 0\le k'\le f(k_i)}\ \{U(f(k_i)-k')+\beta V^{n-1}(k')\}. \tag{P4}
\]
Start with some initial guess for the value function $V(k)$, denoted by $V^0(k)$,
on this set $K$. Note that for a discrete set a function is simply a list of numbers
corresponding to each element in that set. Hence, we can take
\[
V^0(k_i)=\begin{bmatrix}V^0(k_1)\\ V^0(k_2)\\ \vdots\\ V^0(k_N)\end{bmatrix}
=\begin{bmatrix}0\\ 0\\ \vdots\\ 0\end{bmatrix}_{N\times 1}.
\]
Then, for $i=1,\dots,N$,
\[
V^1(k_i)=\max_{k'\in K,\ 0\le k'\le f(k_i)}\ \{U(f(k_i)-k')+\beta\cdot 0\}
=\max\ \{U(f(k_i)-k_1),U(f(k_i)-k_2),\dots,U(f(k_i)-k_N)\}.
\]
We can also store the maximizing values for $k'$ (assuming the maximizer is unique)
as our policy function:
\[
g^1(k_i)=\begin{bmatrix}
g^1(k_1)=\arg\max\{U(f(k_1)-k_1),U(f(k_1)-k_2),\dots,U(f(k_1)-k_N)\}\\
g^1(k_2)=\arg\max\{U(f(k_2)-k_1),U(f(k_2)-k_2),\dots,U(f(k_2)-k_N)\}\\
\vdots\\
g^1(k_N)=\arg\max\{U(f(k_N)-k_1),U(f(k_N)-k_2),\dots,U(f(k_N)-k_N)\}
\end{bmatrix}_{N\times 1}.
\]
Now we can proceed to the next iteration and get, for $i=1,\dots,N$,
\[
V^2(k_i)=\max_{k'\in K,\ 0\le k'\le f(k_i)}\ \Big\{U(f(k_i)-k')+\underbrace{\beta V^1(k')}_{\neq 0}\Big\},
\]
where the values for $V^1(k')$ come from the vector that we stored in the previous
iteration.

Let
\[
m=\frac{\mathrm{norm}(V^{n+1}-V^n)}{\mathrm{norm}(V^n)},
\]
and continue iterating until $m<\bar\varepsilon$, where $\bar\varepsilon$ is a small number.
If our problem satisfies some nice properties, then we know that these iterations will converge to the unique value function of this problem. In the final
iteration, we save the value function and the policy function in order to analyze
how this economy behaves:
\[
V(k_i)=\begin{bmatrix}V(k_1)\\ V(k_2)\\ \vdots\\ V(k_N)\end{bmatrix}_{N\times 1},
\qquad
g(k_i)=\begin{bmatrix}g(k_1)\\ g(k_2)\\ \vdots\\ g(k_N)\end{bmatrix}_{N\times 1}.
\]
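The iteration just described fits in a few lines of code. The sketch below is a pure-Python illustration with log utility, $f(k)=k^a$, and full depreciation; the parameter values and grid are assumptions for this example and are not taken from growmodel.m:

```python
import math

beta, a = 0.95, 0.3
U = math.log                         # utility U(c) = log(c)
f = lambda k: k ** a                 # production function

kstar = (a * beta) ** (1.0 / (1.0 - a))            # analytical steady state
K = [kstar * (0.5 + i / 50.0) for i in range(51)]  # grid bracketing k*
N = len(K)

V = [0.0] * N                        # initial guess V^0 = 0
g = [0] * N                          # policy stored as grid indices
for _ in range(2000):
    V_new = [0.0] * N
    for i in range(N):
        best, best_j = -1e300, 0
        for j in range(N):
            c = f(K[i]) - K[j]       # consumption implied by choice k' = K[j]
            if c <= 0.0:             # enforce feasibility 0 <= k' <= f(k)
                continue
            val = U(c) + beta * V[j]
            if val > best:
                best, best_j = val, j
        V_new[i], g[i] = best, best_j
    m = max(abs(x - y) for x, y in zip(V_new, V))
    V = V_new
    if m < 1e-9:                     # stop when successive iterates are close
        break
```

Following the stored policy from any grid point leads to a neighborhood of $k^*$, in line with Remark 227.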
Remark 228 The Matlab program growmodel.m implements a discrete state space
solution for the nonstochastic one-sector growth model.

Remark 229 Note that we can easily add an endogenous labor supply decision to the
one-sector growth model. If we do that, we will have a static maximization
problem for labor supply. Indeed, before going into the value function iteration we
can find the optimal labor supply decisions for all combinations of $k$ and $k'$.
Then we can use these labor supply values whenever we need them. The Matlab
program growmodel2.m solves the nonstochastic version of the one-sector growth model
with an endogenous labor supply decision. The utility function is assumed to have
the following form:
\[
(1-\gamma)\log(c)+\gamma\log(1-n).
\]
Note that the program starts by finding the optimal labor supply decisions and
utility values for all feasible combinations of $(k,k')$, and then enters the value
function iteration stage. It uses fsolve.m, a built-in Matlab function that finds
the zero of a function of one variable. The function solvelab.m calculates, for
any given value of $n$, the value of the FOC for $n$.
Remark 230 Note that we use the set $K=\{k_1,k_2,\dots,k_N\}$ in the above algorithm
in two places: First, we defined the value functions on $K$. Second, we find the
maximizer in the following problem from the set $K$:
\[
g^n(k_i)=\arg\max_{k'\in K,\ 0\le k'\le f(k_i)}\ \{U(f(k_i)-k')+\beta V^{n-1}(k')\},\quad i=1,\dots,N.
\]
These two uses can be separated: if we allow $k'$ to take values off the grid, we can evaluate $V^n$ between grid points by linear interpolation,
\[
V^n(k)=V^n(k_i)+\frac{V^n(k_{i+1})-V^n(k_i)}{k_{i+1}-k_i}\,(k-k_i),\qquad k\in[k_i,k_{i+1}].
\]
14.2
Stochastic Case
Consider now the stochastic problem
\[
\max\ E\sum_{t=0}^{\infty}\beta^t U(c_t)
\]
subject to
\[
c_t+k_{t+1}=\exp(z_t)f(k_t),
\]
and
\[
z_t=\rho z_{t-1}+u_t,\qquad u_t\sim\text{iid }N(\mu_u,\sigma_u^2).
\]
The associated Bellman equation is
\[
V(k_t,z_t)=\max_{0\le k_{t+1}\le\exp(z_t)f(k_t)}\ \{U(\exp(z_t)f(k_t)-k_{t+1})+\beta E[V(k_{t+1},z_{t+1})\mid z_t]\}. \tag{P5}
\]
The state for this economy is now given by $s_t=(k_t,z_t)$.

Hence, in order to be able to apply discrete state space methods, we also
need to define a grid for $z_t$. Suppose $z_t$ can only take values in a finite set given
by
\[
z\in Z=\{z_1,z_2,\dots,z_M\}.
\]
[Figure: linear interpolation of the value function between grid points: $V(k)$ for $k\in[k_i,k_{i+1}]$ is built from $V(k_i)$, $V(k_{i+1})$, and the slope $(V(k_{i+1})-V(k_i))/(k_{i+1}-k_i)$.]
Then one can represent the autocorrelation of $z_t$ using a transition matrix
\[
\Pi=\begin{bmatrix}\pi_{11} & \pi_{12} & \cdots & \pi_{1M}\\
\pi_{21} & \pi_{22} & \cdots & \pi_{2M}\\
\vdots & & & \vdots\\
\pi_{M1} & \pi_{M2} & \cdots & \pi_{MM}\end{bmatrix},
\]
where
\[
\pi_{ij}=\Pr[z_{t+1}=z_j\mid z_t=z_i],\qquad \pi_{ij}\ge 0\ \forall i,j,
\]
and
\[
\sum_{j=1}^{M}\pi_{ij}=1\ \text{for all } i,
\]
with, for each $i$, at least one $j$ such that $\pi_{ij}>0$.
We can then define the following iteration on $K\times Z$: for $i=1,\dots,N$ and $j=1,\dots,M$,
\[
V^n(k_i,z_j)=\max_{k'\in K,\ 0\le k'\le\exp(z_j)f(k_i)}\ \Big\{U(\exp(z_j)f(k_i)-k')+\beta\sum_{r=1}^{M}\pi_{jr}V^{n-1}(k',z_r)\Big\}, \tag{91}
\]
and
\[
g^n(k_i,z_j)=\arg\max_{k'\in K,\ 0\le k'\le\exp(z_j)f(k_i)}\ \Big\{U(\exp(z_j)f(k_i)-k')+\beta\sum_{r=1}^{M}\pi_{jr}V^{n-1}(k',z_r)\Big\}. \tag{92}
\]
Then, starting from the initial guess
\[
V^0(k_i,z_j)=\begin{bmatrix}0 & 0 & \cdots & 0\\
\vdots & & & \vdots\\
0 & 0 & \cdots & 0\end{bmatrix}_{N\times M},
\]
the first iteration gives
\[
V^1(k_i,z_j)=\max_{k'\in K,\ 0\le k'\le\exp(z_j)f(k_i)}\ \{U(\exp(z_j)f(k_i)-k')+0\}= \tag{P7}
\]
\[
\begin{bmatrix}
\max_{k'}\{U(\exp(z_1)f(k_1)-k')\} & \max_{k'}\{U(\exp(z_2)f(k_1)-k')\} & \cdots & \max_{k'}\{U(\exp(z_M)f(k_1)-k')\}\\
\max_{k'}\{U(\exp(z_1)f(k_2)-k')\} & \max_{k'}\{U(\exp(z_2)f(k_2)-k')\} & \cdots & \max_{k'}\{U(\exp(z_M)f(k_2)-k')\}\\
\vdots & & & \vdots\\
\max_{k'}\{U(\exp(z_1)f(k_N)-k')\} & \max_{k'}\{U(\exp(z_2)f(k_N)-k')\} & \cdots & \max_{k'}\{U(\exp(z_M)f(k_N)-k')\}
\end{bmatrix}_{N\times M}.
\]
Once we have stored $V^1(k_i,z_j)$ and $g^1(k_i,z_j)$, we can move to the next
iteration and compute
\[
V^2(k_i,z_j)=\max_{k'\in K,\ 0\le k'\le\exp(z_j)f(k_i)}\ \Big\{U(\exp(z_j)f(k_i)-k')+\beta\sum_{r=1}^{M}\pi_{jr}V^1(k',z_r)\Big\},
\]
where the $\pi_{jr}$ are given exogenously and the $V^1(k',z_r)$ are the values that we stored
in the previous iteration. We can keep repeating this procedure until we have
convergence.
14.2.1 Tauchen's Method

Suppose $z$ follows the AR(1) process
\[
z_t=\rho z_{t-1}+u_t,\qquad u_t\sim\text{iid }N(\mu_u,\sigma_u^2),
\]
and we want to approximate it with a finite-state Markov chain with transition matrix $\Pi$. If the only information we had was the information
on $\rho$, $\mu_u$, and $\sigma_u$, could we come up with the entries of $\Pi$? One way to achieve
this is to use Tauchen's method:

1. Determine a grid for $Z$. Let $z_1=\mu_z-q\sigma_z$ and $z_M=\mu_z+q\sigma_z$, where $q$ is
some integer value, and $\mu_z$ and $\sigma_z$ are the unconditional mean and standard deviation of $z$. Then simply pick $M$ equally spaced points between
$z_1$ and $z_M$ to form
\[
Z=\{z_1,\dots,z_M\}.
\]

2. Let the distance between the points be represented by $w=z_k-z_{k-1}$.
Then, for each $i$, if $j\in\{2,\dots,M-1\}$, the transition probabilities are
given by
\[
\pi_{ij}=\Pr\Big[z_j-\frac{w}{2}\le\rho z_i+u\le z_j+\frac{w}{2}\Big]
=F\Big(z_j+\frac{w}{2}-\rho z_i\Big)-F\Big(z_j-\frac{w}{2}-\rho z_i\Big),
\]
where $F$ is the cdf of $u$, i.e. $\Pr[u_t\le a]=F(a)$.

[Figure: the interval $(z_2-w/2,\ z_2+w/2)$ of width $w$ around the grid point $z_2$.]

3. Finally, for the endpoints,
\[
\pi_{i1}=F\Big(z_1+\frac{w}{2}-\rho z_i\Big),
\]
and
\[
\pi_{iM}=1-F\Big(z_M-\frac{w}{2}-\rho z_i\Big).
\]
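A sketch of this procedure in Python, using the error function for the normal cdf ($\mu_u=0$ is assumed, and the parameter values in the call are illustrative):

```python
import math

def norm_cdf(x, sigma):
    """CDF of a N(0, sigma^2) random variable, via the error function."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def tauchen(rho, sigma_u, M=5, q=3):
    """Discretize z' = rho*z + u, u ~ N(0, sigma_u^2), by Tauchen's method."""
    sigma_z = sigma_u / math.sqrt(1.0 - rho ** 2)   # unconditional std of z
    z = [-q * sigma_z + 2.0 * q * sigma_z * i / (M - 1) for i in range(M)]
    w = z[1] - z[0]                                  # distance between points
    Pi = [[0.0] * M for _ in range(M)]
    for i in range(M):
        for j in range(M):
            if j == 0:
                Pi[i][j] = norm_cdf(z[0] + w / 2 - rho * z[i], sigma_u)
            elif j == M - 1:
                Pi[i][j] = 1.0 - norm_cdf(z[M - 1] - w / 2 - rho * z[i], sigma_u)
            else:
                Pi[i][j] = (norm_cdf(z[j] + w / 2 - rho * z[i], sigma_u)
                            - norm_cdf(z[j] - w / 2 - rho * z[i], sigma_u))
    return z, Pi

z, Pi = tauchen(rho=0.81, sigma_u=0.02)
```

By construction each row of Pi sums to one (the middle terms telescope), and with positive persistence the diagonal entries dominate.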
14.2.2
Simulations
So far we have looked at the numerical solutions of the nonstochastic and stochastic versions of the one-sector growth model. We showed how we can use
discrete state space methods to find $V$ and $g$. Once we find the value and policy
functions, we would like to know how these artificial economies behave.

In the nonstochastic case, since the capital stock is the only state variable,
$g(k)$ gives us all the information we need. Given an initial capital stock $k_0$,
we can analyze, using $g(k)$, how this economy evolves, i.e. how the optimal
sequence $\{k_t\}_{t=0}^{\infty}$ behaves. We know that under standard assumptions on the utility
and production functions, $g(k)$ is strictly increasing and has a unique positive
stationary point defined by
\[
k^*=(f')^{-1}(1/\beta). \tag{93}
\]
If
\[
U(c)=\frac{c^{1-\sigma}}{1-\sigma}
\]
and
\[
f(k)=k^a+(1-\delta)k,
\]
then
\[
k^*=\Big[\frac{1/\beta-(1-\delta)}{a}\Big]^{1/(a-1)}.
\]

[Figure: the optimal decision rule $k_{t+1}=g(k_t)$ plotted against current capital.]

[Figure: the time path of the capital stock, converging to $k^*$.]
subject to
\[
c_t+k_{t+1}=\exp(z_t)F(k_t).
\]
The Matlab code bc.m solves a particular version of this problem in which
\[
u(c)=\frac{c^{1-\sigma}}{1-\sigma},\quad\text{and}\quad F(k)=k^{\alpha},
\]
and $z_t$ follows
\[
z_t=\rho z_{t-1}+u_t,\qquad u_t\sim N(0,\sigma_u^2).
\]
In bc.m, the parameter values are $\beta=0.95$, $\alpha=0.3$, $\rho=0.81$, and $\sigma_u=0.02$.
Figure 25 shows the decision rule for $k_{t+1}$ that bc.m generates. Note that each
line is a decision rule for a given value of $z$.

Given these decision rules, we would like to know how this economy behaves.
In this case, we have to analyze how the capital stock and the exogenous shocks
behave jointly. In order to do this, let $p_{ij,rs}$ be the probability that the current capital stock
is $k_i$ and the current shock is $z_j$ and the economy moves to a state where the
capital stock is $k_r$ and the shock is $z_s$. That is,
\[
p_{ij,rs}=\Pr[k'=k_r,\ z'=z_s\mid k=k_i,\ z=z_j].
\]
Note that $\Pr[z'=z_s\mid z=z_j]=\pi_{js}$, regardless of the capital stock. What is the probability that we end up at $k_r$ next period, given that the current
state is $s=(k_i,z_j)$? We know the policy function $g(k,z)$; hence, given $s=(k_i,z_j)$, we know exactly what the optimal choice of next period's capital stock is. Then,
\[
p_{ij,rs}=\begin{cases}\pi_{js}, & \text{if } g(k_i,z_j)=k_r,\\ 0, & \text{otherwise.}\end{cases}
\]
Consider the following ordering of all possible states for this economy:
\[
S=\{(k_1,z_1),(k_2,z_1),\dots,(k_N,z_1),(k_1,z_2),\dots,(k_N,z_M)\},
\]
and construct the following transition matrix on $S$:
\[
P=\begin{bmatrix}p_{11,11} & p_{11,21} & \cdots & p_{11,NM}\\
p_{21,11} & & & \\
\vdots & & & \vdots\\
p_{NM,11} & & \cdots & p_{NM,NM}\end{bmatrix}_{NM\times NM}.
\]
That is
6
6
p0 = 6
4
with
p0ij
p011
p021
..
.
p0N M
0; 8i; j; and
3
7
7
7
5
NM
N X
M
X
pij = 1:
i=1 j=1
Then, we can iterate on p_0 using the transition matrix P to get the next period's distribution over S:

p_1 = P p_0.

If we keep iterating on

p_n = P p_{n-1},

the sequence converges to the long run (stationary) distribution p* over S.
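This iteration is a few lines of Python (note that with P stored row-to-column as in the sketch above, the update applies the transpose, i.e. p'_{n+1} = p'_n P):

```python
import numpy as np

def stationary(P, tol=1e-12, max_iter=1_000_000):
    """Iterate the distribution forward until it stops changing."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])   # start from a uniform p_0
    for _ in range(max_iter):
        p_next = p @ P                           # next period's distribution
        if np.max(np.abs(p_next - p)) < tol:
            return p_next
        p = p_next
    return p
```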
Note that the Euler equation for this economy involves the conditional expectation

E_t[exp(z_{t+1}) k_{t+1}^{α-1}] = E_t[exp(ρz_t + u) k_{t+1}^{α-1}].
[Figure 26: long run distribution of capital.]
[Figure 27: long run distribution of capital with narrow grid for k.]
Since the only random variable here is u, the above equation can be rearranged as

0 = ln(αβ) + ρz_t + (α-1) ln k_{t+1} + ln E_t[exp(u)].

Recall that the moment generating function of u ~ N(0, σ_u²) is M(t) = exp(σ_u² t²/2). Then,

E_t[exp(u)] = M(1) = exp(σ_u²/2),

so

0 = ln(αβ) + ρz_t + (α-1) ln k_{t+1} + ln[exp(σ_u²/2)].
Therefore,

ln k_{t+1} = (1/(1-α)) [ln(αβ) + ρz_t + σ_u²/2].

Taking unconditional expectations, and using E[z_t] = 0,

E[ln k_{t+1}] = (1/(1-α)) [ln(αβ) + σ_u²/2],
and

Var(ln k_{t+1}) = (1/(1-α))² Var(ρz_t)
               = (ρ²/(1-α)²) Var(z_t)
               = ρ²σ_u² / [(1-α)²(1-ρ²)].

Note that this variance is positive, since ρ > 0 and σ_u > 0 in bc.m.
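These closed-form moments are easy to evaluate, and the MGF step can be checked by simulation. The sketch below uses the bc.m parameter values as read above; treat the whole block as illustrative:

```python
import numpy as np

alpha, beta, rho, sigma_u = 0.3, 0.95, 0.81, 0.02   # bc.m values (as read above)

# Moments of ln k_{t+1} = (1/(1-alpha)) * [ln(alpha*beta) + rho*z_t + sigma_u**2/2]
mean_lnk = (np.log(alpha * beta) + sigma_u ** 2 / 2) / (1 - alpha)
var_lnk = rho ** 2 * sigma_u ** 2 / ((1 - alpha) ** 2 * (1 - rho ** 2))

# Monte Carlo check of the MGF step: E[exp(u)] = exp(sigma_u**2 / 2)
rng = np.random.default_rng(0)
mc_mean = np.exp(rng.normal(0.0, sigma_u, size=200_000)).mean()
```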
Determine z_2 using [π_{j1}, π_{j2}, π_{j3}, ..., π_{jm}]. Note that this is a probability distribution over Z. To determine the value for z_2, we can use a random number generator. For example, the Matlab command rand gives a uniform random number between 0 and 1. Suppose we draw such a random number; call it u. Now if u is less than π_{j1}, then we will set z_2 = z_1. If it is larger than π_{j1} but less than π_{j1} + π_{j2}, we will set z_2 = z_2 (note that the subscript on the left hand side refers to time and the one on the right hand side refers to the index from the set Z), and so on. This way we can determine z_2. Suppose, given u, we have z_2 = z_5; then we will use [π_{51}, π_{52}, π_{53}, ..., π_{5m}] and a new uniform random number to determine z_3, etc.
Often you generate a series longer than T and discard the first part of the simulation to avoid the effect of the initial z.
2. Once you have T periods of z, you can start from some value of k (for example, you can set k_1 to the expected value of k using p*); and, given k_1 and z_1, get k_2 = g(k_1, z_1). Then, you can get k_3 = g(k_2, z_2), etc. This way we can generate time series for y_t, c_t, and k_t that we can compare with the data.
3. This way we can generate all the variables that we care about. Usually this procedure is repeated several times to get multiple simulations for each variable we care about.
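The steps above can be sketched in Python (bc.m itself is Matlab; the function and array names here are mine). The shock draws use exactly the inverse-CDF trick described in step 1:

```python
import numpy as np

def simulate_paths(Pi, g_idx, T, i0=0, j0=0, seed=0):
    """Simulate index paths for the shock (via the Markov matrix Pi) and
    for capital (via the decision-rule indices g_idx[i, j])."""
    rng = np.random.default_rng(seed)
    cum = np.cumsum(Pi, axis=1)          # row-wise CDFs of Pi
    i_path = np.empty(T, dtype=int)      # capital grid indices
    j_path = np.empty(T, dtype=int)      # shock grid indices
    i, j = i0, j0
    for t in range(T):
        i_path[t], j_path[t] = i, j
        i = g_idx[i, j]                  # k_{t+1} = g(k_t, z_t)
        j = int(np.searchsorted(cum[j], rng.uniform()))   # draw z_{t+1}
    return i_path, j_path
```

Given the grid values, y_t, c_t, and k_t are then read off the index paths.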
After finding V, g, and p*, bc.m generates simulated data. It first generates productivity shocks for 100 periods (20 such series in total). Figure 28 shows examples of such time series for z.
Once we have a time series for z, we can use it to generate time series for other variables we care about using g(k, z). In order to do this, let {z_t}_{t=0}^{100} be one of the time series bc.m generates. Furthermore, let k_0 be the mean of the capital stock in Figure 26. Then, y_0 = exp(z_0)k_0^α, k_1 = g(k_0, z_0), y_1 = exp(z_1)k_1^α, k_2 = g(k_1, z_1), and so on. This way we can construct a series for y_t. Figure 29 shows one such series for y_t that bc.m generates.
The next question is how we can compare the time series that our artificial economy generates with the U.S. data.
Hodrick-Prescott Filter In order to be able to compare our simulated data with the U.S. data, we will first look at the U.S. data. The following figure shows the U.S. real quarterly GDP between 1947 and 2003. At first glance it does not look anything like the picture that comes out of our model. But our model did not have growth, so we have to remove the growth component from the data. Furthermore, we are mainly interested in business cycle fluctuations, i.e. fluctuations that occur with a frequency of 3 to 5 years.
The common procedure in the real business cycle literature is to use the Hodrick-Prescott (HP) filter, which works as follows. Consider a series {Y_t}_{t=1}^{T}. Suppose Y_t consists of two components: a trend component τ_t and a cyclical component Y_t^d:

Y_t = τ_t + Y_t^d.
[Figure 28: four examples of simulated time series for z (the first, second, tenth, and twentieth series).]
[Figure 29: a simulated series for y_t.]
[Figure: deviations from trend.]
The HP filter picks {τ_t}_{t=1}^{T} to solve

min_{{τ_t}_{t=1}^{T}} Σ_{t=1}^{T} (Y_t − τ_t)² + λ Σ_{t=2}^{T−1} [(τ_{t+1} − τ_t) − (τ_t − τ_{t−1})]².
In this problem there is a trade-off between the extent to which the trend component tracks the actual series and the smoothness of the trend. If λ = 0, then τ_t = Y_t and Y_t^d = 0; as λ → ∞, τ_t approaches a linear trend. For quarterly data, it is customary to set λ = 1600. This way the HP filter eliminates fluctuations at frequencies lower than 32 quarters (8 years).
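The filter is easy to implement directly: the first-order conditions of the minimization above form the linear system (I + λF'F)τ = Y, where F is the second-difference operator. A Python sketch:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Return (trend, cycle): the FOCs of the HP problem give
    (I + lam * F'F) tau = y, with F the (T-2) x T second-difference matrix."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    F = np.zeros((T - 2, T))
    for t in range(T - 2):
        F[t, t:t + 3] = [1.0, -2.0, 1.0]
    tau = np.linalg.solve(np.eye(T) + lam * (F.T @ F), y)
    return tau, y - tau
```

Since a linear series has zero second differences, the filter reproduces it exactly as the trend; λ only matters once the data bend.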
The following pictures show the actual data together with the trend component as well as deviations from the trend.
Suppose we apply the same procedure to our simulated data. Figure 30
shows the cyclical component in the model. The basic comparison is between
the cyclical component in the data and the cyclical component in the model.
14.2.3 Calibration
In order to simulate the model, we need to choose functional forms for u and f and specify parameter values. We also have to specify the stochastic structure for z. How can we do this? Prescott (1986) uses long run (secular) growth observations to choose functional forms and parameters:
Observation 1: In the U.S. economy, the capital and labor shares of output have been relatively constant, while their relative prices change over time. This suggests a Cobb-Douglas production function,

f(k, n) = k^{1−θ} n^{θ},

with labor share parameter θ. Furthermore, the average value of the labor share in total output in the U.S. has been about 64%. Hence, we set θ = 0.64.
Observation 2: In the U.S. economy, real wages have increased over time, yet per capita market hours have been relatively constant. This suggests a unit elasticity of substitution between consumption and leisure. Hence, the following functional form is appropriate:

u(c, 1−n) = [c^{1−φ}(1−n)^{φ}]^{1−σ}/(1−σ),

which, for σ = 1, becomes

u(c, 1−n) = (1−φ) log(c) + φ log(1−n).
Given this production function, the growth of the productivity shock can be measured as a Solow residual:

Δ log(z_t) = (log(Y_{t+1}) − log(Y_t)) − (1−θ)(log(K_{t+1}) − log(K_t)) − θ(log(N_{t+1}) − log(N_t)),

where Y_t, K_t and N_t are aggregate output, capital and labor. Setting θ = 0.64, one can construct a time series for z_t and analyze its statistical properties.
Remaining Parameters: We still need to determine β, δ, and φ. In order to choose these parameters, consider again the DP given by

V(k_t, z_t) = max_{k_{t+1}, n_t} {(1−φ) log(c_t) + φ log(1−n_t) + β E_t V(k_{t+1}, z_{t+1})},

subject to

c_t + k_{t+1} ≤ z_t k_t^{1−θ} n_t^{θ} + (1−δ)k_t, k_{t+1} ≥ 0,

and

z_{t+1} = ρz_t + ε_t.
Then, the FOC for k_{t+1} is given by

(1−φ)(1/c_t) = β E_t V_1(k_{t+1}, z_{t+1}),

and the envelope condition gives

V_1(k_{t+1}, z_{t+1}) = (1−φ)(1/c_{t+1})(z_{t+1}(1−θ)k_{t+1}^{−θ} n_{t+1}^{θ} + 1 − δ).   (P1)

Similarly, the FOC for n_t is

φ (1/(1−n_t)) = (1−φ)(1/c_t) θ z_t k_t^{1−θ} n_t^{θ−1}.
Finally, we can write the following Euler equation that determines the accumulation of capital:

(1−φ)(1/c_t) = β E_t [(1−φ)(1/c_{t+1})(z_{t+1}(1−θ)k_{t+1}^{−θ} n_{t+1}^{θ} + 1 − δ)],

or

c_{t+1}/c_t = β (z_{t+1}(1−θ)(n_{t+1}/k_{t+1})^{θ} + 1 − δ).

In steady state c_{t+1}/c_t = 1, so

1/β = (1−θ)(y/k) + 1 − δ.   (C1)
The FOC for labor,

(1−φ)(1/c_t) θ z_t k_t^{1−θ} n_t^{θ−1} = φ/(1−n_t),

can be written in steady state as

θ (y/c)((1−n)/n) = φ/(1−φ).   (C2)
In the U.S. economy, the rate of return on capital is about 4%. This suggests

1/β = 1 + 0.04 ⟹ β = 0.96.

In the U.S. economy, the capital-output ratio, k/y, is about 2.6. Then, using (C1),

1/β = (1−θ)(y/k) + 1 − δ,

we get

δ = (1−θ)(y/k) − (1/β − 1) = 0.36/2.6 − (1/0.96 − 1) = 0.097.
Finally, given θ = 0.64, n = 1/3, and y/c = 1.3 (again the ratio for the U.S. economy), and using

θ (y/c)((1−n)/n) = φ/(1−φ),

we have

φ/(1−φ) = 0.64 (1.3) ((2/3)/(1/3)) = 1.664,

which implies

φ = 1.664/2.664 = 0.625.
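The whole calibration above fits in a few lines of Python; the targets are the U.S. ratios quoted in the text:

```python
theta = 0.64        # labor share
r = 0.04            # return on capital
k_over_y = 2.6      # capital-output ratio
n = 1.0 / 3.0       # market hours
y_over_c = 1.3      # output-consumption ratio

beta = 1.0 / (1.0 + r)                     # from 1/beta = 1 + 0.04
delta = (1.0 - theta) / k_over_y - r       # from 1/beta = (1-theta)*y/k + 1 - delta
ratio = theta * y_over_c * (1.0 - n) / n   # (C2): phi/(1-phi)
phi = ratio / (1.0 + ratio)
```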
Remark 231 See Cooley and Prescott (1995) for a much more detailed discussion of the basic business cycle regularities and the calibration procedure.
The standard RBC model relies on a single productivity shock to generate business cycles. The success of the model is judged by its performance in generating unconditional second moments of de-trended data. Tables 1 and 2 (taken from Hansen and Wright (1992)) show the basic set of observations that a standard RBC model tries to replicate. These observations are: a) relative volatility of different variables (output, consumption, investment, productivity, and labor input); b) contemporaneous correlations between variables (correlations of consumption, investment and labor input with output, as well as the correlation between productivity and labor input).
Table 3 (again taken from Hansen and Wright (1992)) shows the performance of the standard RBC model. A comparison between Tables 1 and 2 and Table 3 reveals that: a) the standard model is not able to generate the level of volatility we observe in the data; b) it does relatively well with respect to relative volatilities; c) it does a poor job of generating the low correlation between hours and productivity (which is 0.93 in the standard model).
Remark 232 Note that the set-up that Hansen and Wright (1992) consider is exactly the one we outlined above.
Charts 1 and 2 (from Hansen and Wright (1992)) show the relation between productivity and hours in the U.S. data and in the standard model. The standard model generates a positive relation between hours and productivity. In the standard model, productivity shocks operate as labor demand shocks along a stable labor supply curve. Furthermore, with the current calibration, the labor supply is not very elastic, so the relation in Chart 2 is quite steep. The low elasticity is responsible for the low volatility of output in Table 3. How can we improve upon this? There are two obvious candidates:
increase the elasticity of labor supply;
introduce labor demand shocks.
Hansen and Wright (1992) discuss different ways to achieve these goals.
14.2.4 Linearization
In the previous sections we analyzed how we can simulate artificial data from a stochastic dynamic general equilibrium model and looked at one possible way to confront the model with the data (by looking at unconditional second moments). The key step in this analysis was an approximation of the decision function g(k_t, z_t). We approximated the value function V and the associated decision rule g on a finite number of grid points. Obviously, this is not the only way to find an approximation for g.
An alternative way is to linearize the model dynamics around the non-stochastic steady state and try to find a decision rule of the following sort:

g(k_t, z_t) = ak_t + bz_t.

In some particular cases (indeed, in one) such linear rules emerge without any approximation.
Remark 233 Unfortunately, the notation changes as we go along in these notes; e.g., here we use K to denote the per capita capital stock, while we used k for this variable above.
Consider again the standard stochastic one-sector growth model. Output is produced according to

Y_t = K_t^{α}(A_t L_t)^{1−α},

where Y, K, A, and L are output, capital stock, labor-augmenting productivity shock, and labor input, respectively. The law of motion for the capital stock is given by

K_{t+1} = K_t(1−δ) + I_t,

and factor prices are

w_t = (1−α)K_t^{α}(A_t L_t)^{−α} A_t = (1−α)(K_t/(A_t L_t))^{α} A_t,

and

R_t = 1 + α(A_t L_t/K_t)^{1−α} − δ.

Individuals maximize

E_0 Σ_{t=0}^{∞} β^t u(C_t, 1−L_t),

where

u(C, 1−L) = ln(C) + b ln(1−L), b > 0.

Suppose δ = 1. Then,

K_{t+1} = I_t = Y_t − C_t,

and

R_t = α(A_t L_t/K_t)^{1−α}.
The Euler equation for this problem is 1/C_t = β E_t [R_{t+1}/C_{t+1}], or, in logs,

−ln(C_t) = ln(β) + ln E_t [R_{t+1}/C_{t+1}].

Guess that consumption is a constant fraction of output, C_t = (1−s_t)Y_t, so that K_{t+1} = s_t Y_t. Substituting our guess, we have

−ln[(1−s_t)Y_t] = ln(β) + ln E_t [R_{t+1}/((1−s_{t+1})Y_{t+1})].

Substituting R_{t+1} = αY_{t+1}/K_{t+1} and K_{t+1} = s_t Y_t, the Y_{t+1} terms cancel and we have

−ln(1−s_t) − ln(Y_t) = ln(β) + ln E_t [α/(s_t Y_t (1−s_{t+1}))],

which becomes

ln(s_t) − ln(1−s_t) = ln(β) + ln(α) + ln E_t [1/(1−s_{t+1})].

With a constant savings rate, s_t = s_{t+1} = s, this reads

ln(s) − ln(1−s) = ln(β) + ln(α) − ln(1−s),

so that

s = αβ.
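Since the Euler equation holds state by state under this guess, s = αβ is easy to verify numerically (the parameter values and output draws below are arbitrary):

```python
import numpy as np

alpha, beta = 0.3, 0.95        # illustrative values
s = alpha * beta               # claimed constant savings rate

rng = np.random.default_rng(0)
Y_t = rng.uniform(0.5, 2.0, size=1000)     # arbitrary current output levels
K_next = s * Y_t                            # K_{t+1} = s*Y_t
# With R_{t+1} = alpha*Y_{t+1}/K_{t+1} and C_{t+1} = (1-s)*Y_{t+1},
# Y_{t+1} cancels and R_{t+1}/C_{t+1} = alpha/(K_{t+1}*(1-s)):
lhs = 1.0 / ((1.0 - s) * Y_t)               # u'(C_t)
rhs = beta * alpha / (K_next * (1.0 - s))   # beta * E_t[u'(C_{t+1}) R_{t+1}]
```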
Hence,

Y_t = K_t^{α}(A_t L_t)^{1−α},

which can be written simply in terms of K_t and A_t once we solve for L_t.
To find L_t, note that the static FOC for L_t is

b C_t/(1−L_t) = w_t.

Substituting C_t = (1−s)Y_t and w_t = (1−α)Y_t/L_t, we get

L_t = L = (1−α)/[(1−α) + b(1−s)],

so hours are constant.
Let's put the pieces together now. First note that ln(Y_t) is

ln(Y_t) = α ln(K_t) + (1−α)(ln A_t + ln L_t)
        = α ln(s) + α ln(Y_{t−1}) + (1−α)(ln A_t + ln L)
        = α ln(s) + α ln(Y_{t−1}) + (1−α)(A + gt) + (1−α)a_t + (1−α) ln(L),   (94)

where we write ln A_t = A + gt + a_t, so that a_t is the stochastic deviation of log productivity from its deterministic trend. Subtracting gt from both sides,

ln(Y_t) − gt = α ln(s) + α ln(Y_{t−1}) − gt + (1−α)[A + ln(L)] + (1−α)gt + (1−α)a_t
             = α ln(s) + α[ln(Y_{t−1}) − g(t−1)] − αg + (1−α)[A + ln(L)] + (1−α)a_t
             = X + α[ln(Y_{t−1}) − g(t−1)] + (1−α)a_t,

where X ≡ α ln(s) + (1−α)[A + ln(L)] − αg. Along a non-stochastic growth path (a_t = 0), this becomes

ln(Y_t) − gt = X + α[ln(Y_t) − gt],
where we used the fact that ln(Y_t) − ln(Y_{t−1}) = g along such a path. Let ln(Y*_t) be the value of ln(Y_t) along the non-stochastic growth path, which is then given by

ln(Y*_t) = X/(1−α) + gt.

Finally, let ŷ_t be the deviation of ln(Y_t) from ln(Y*_t), i.e.

ŷ_t = ln(Y_t) − ln(Y*_t) = ln(Y_t) − X/(1−α) − gt.   (95)

Then,

ln(Y_{t−1}) = ŷ_{t−1} + X/(1−α) + g(t−1).   (96)
Substituting (96) into (94) and subtracting ln(Y*_t) = X/(1−α) + gt from both sides, all constant and trend terms cancel, and we are left with

ŷ_t = α ŷ_{t−1} + (1−α)a_t.
Now suppose a_t follows an AR(1) process, a_t = ρa_{t−1} + ε_t. Then, it can be shown that the deviations of the logarithm of output from its steady state follow an AR(2) process:

ŷ_t = α ŷ_{t−1} + (1−α)(ρa_{t−1} + ε_t)
    = α ŷ_{t−1} + ρ(ŷ_{t−1} − α ŷ_{t−2}) + (1−α)ε_t
    = (α + ρ) ŷ_{t−1} − αρ ŷ_{t−2} + (1−α)ε_t,

where the second line uses (1−α)a_{t−1} = ŷ_{t−1} − α ŷ_{t−2}.
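This equivalence is exact and easy to confirm by simulating both representations with the same innovations (the α and ρ values below are illustrative):

```python
import numpy as np

alpha, rho, T = 0.3, 0.8, 200
rng = np.random.default_rng(1)
eps = rng.normal(size=T)

# Direct form: a_t = rho*a_{t-1} + eps_t, yhat_t = alpha*yhat_{t-1} + (1-alpha)*a_t
a = np.zeros(T)
y1 = np.zeros(T)
for t in range(1, T):
    a[t] = rho * a[t - 1] + eps[t]
    y1[t] = alpha * y1[t - 1] + (1 - alpha) * a[t]

# AR(2) form, started from the same first two values:
y2 = np.zeros(T)
y2[:2] = y1[:2]
for t in range(2, T):
    y2[t] = (alpha + rho) * y2[t - 1] - alpha * rho * y2[t - 2] + (1 - alpha) * eps[t]
```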
Note that in this simple model, the linear decision rules allowed us to show that ŷ_t follows a particular stochastic process. Hence, linearized models provide us with additional tools to confront the model with the data. Based on this simple model, a natural question we can ask, for example, is whether the deviations of the logarithm of GDP in the U.S. data follow an AR(2) process.
Method of Undetermined Coefficients Campbell (1994) analyzes linearized versions of the stochastic one-sector growth model that do not allow for analytic solutions. His strategy is first to linearize the model around a non-stochastic growth path. His basic set-up is very similar to the one we analyzed above, without the restriction δ = 1.
Output is produced according to

Y_t = (A_t N_t)^{α} K_t^{1−α},

the law of motion for the capital stock is

K_{t+1} = (1−δ)K_t + Y_t − C_t,   (97)

and the productivity shock follows

ln(A_t) = φ ln(A_{t−1}) + ε_t.   (98)

He starts with an economy in which the labor input is fixed, i.e. N_t = 1, and identical agents who want to maximize

U = E_0 Σ_{i=0}^{∞} β^i C_{t+i}^{1−γ}/(1−γ).

The Euler equation for this problem is

C_t^{−γ} = β E_t [C_{t+1}^{−γ} R_{t+1}].   (99)
A log-linear approximation of the model around its non-stochastic steady state can be written as

k_{t+1} = λ_1 k_t + λ_2 a_t + (1−λ_1)c_t,   (100)

and

E_t c_{t+1} = c_t + σλ_3 E_t (a_{t+1} − k_{t+1}),   (101)

where x = ln(X) for any variable of interest, λ_1, λ_2 and λ_3 are constants that depend on the model's parameters, and σ = 1/γ.
Then equations (100), (101), and (98) define a log-linear system. His strategy is to go from these three equations to linear decision rules for k_{t+1} and c_t that have the following forms:

k_{t+1} = η_{kk} k_t + η_{ka} a_t,
c_t = η_{ck} k_t + η_{ca} a_t.

The idea is to find these four coefficients (which is where the term "method of undetermined coefficients" comes from) that are consistent with (100), (101), and (98). Once we have these linear rules, we can: 1) use them to simulate data from our model economies (exactly as we did when we solved the model directly on a grid); 2) use these linear decision rules to determine the statistical properties that the model variables have to follow (e.g., in this case y_t is an ARMA(2,1) process).
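A sketch of the method in Python. Plugging the guessed rules into (100) and (101) (with E_t a_{t+1} = φa_t) and matching coefficients on k_t and a_t gives four equations: η_kk solves a quadratic, and the rest follow. The λ, σ, and φ values below are illustrative, not Campbell's calibration:

```python
import numpy as np

lam1, lam2, lam3 = 1.05, 0.10, 0.10   # illustrative linearization constants
sigma, phi = 1.0, 0.90                # EIS and shock persistence (illustrative)

# Matching k_t in (100):  eta_kk = lam1 + (1-lam1)*eta_ck
# Matching k_t in (101):  eta_ck*eta_kk = eta_ck - sigma*lam3*eta_kk
# Eliminating eta_ck gives
#   eta_kk**2 - (1 + lam1 - sigma*lam3*(1-lam1))*eta_kk + lam1 = 0.
b = 1.0 + lam1 - sigma * lam3 * (1.0 - lam1)
eta_kk = (b - np.sqrt(b * b - 4.0 * lam1)) / 2.0   # stable root, |eta_kk| < 1
eta_ck = (eta_kk - lam1) / (1.0 - lam1)

# Matching a_t in (100) and (101) gives a 2x2 linear system in (eta_ka, eta_ca):
#   eta_ka - (1-lam1)*eta_ca                      = lam2
#   (eta_ck + sigma*lam3)*eta_ka + (phi-1)*eta_ca = sigma*lam3*phi
A = np.array([[1.0, -(1.0 - lam1)],
              [eta_ck + sigma * lam3, phi - 1.0]])
eta_ka, eta_ca = np.linalg.solve(A, [lam2, sigma * lam3 * phi])
```

With the four η's in hand, simulating k_t and c_t is just iterating two linear equations forward.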
Can we generalize this procedure? This is done by Christiano (2001). His strategy is first to show that a linearized stochastic model can be written as

E_t [Σ_{i=0}^{r} α_i z_{t+r−1−i} + Σ_{i=0}^{r−1} β_i s_{t+r−1−i}] = 0,   (102)

where z_t is an endogenous state variable (like the capital stock), s_t is an exogenous shock, and the α_i's and β_i's are constants that depend on model parameters. Suppose the shock follows an AR(1) process:

s_t = ρs_{t−1} + ε_t.   (103)

Then, Christiano (2001) shows that (102) and (103) can be written as

z_t = Az_{t−1} + Bs_t,

where A and B are matrices (if z_t contains more than one variable).
Remark 234 Obviously, this note constitutes a very preliminary introduction to numerical methods that are used to study dynamic macroeconomic models. See Judd (1998) and Marimon and Scott (1999) for a more thorough analysis.
References
[1] Campbell, John Y. 1994. "Inspecting the Mechanism: An Analytical Approach to the Stochastic Growth Model." Journal of Monetary Economics, 33, 463-506.
[2] Christiano, Lawrence J. 2001. "Solving Dynamic Equilibrium Models by a Method of Undetermined Coefficients." Mimeo, Northwestern University.
[3] Cooley, Thomas F. and Prescott, Edward C. 1995. "Economic Growth and Business Cycles." In Frontiers of Business Cycle Research, edited by Thomas F. Cooley. Princeton University Press.
[4] Hansen, Gary D. 1985. "Indivisible Labor and the Business Cycle." Journal of Monetary Economics, 16, 309-327.
[5] Hansen, Gary D. and Wright, Randall. 1992. "The Labor Market in Real Business Cycle Theory." Federal Reserve Bank of Minneapolis Quarterly Review, Spring.
[6] Judd, Kenneth L. 1998. Numerical Methods in Economics. MIT Press.
[7] Marimon, Ramon and Scott, Andrew (eds.). 1999. Computational Methods for the Study of Dynamic Economies. Oxford University Press.
[8] Prescott, Edward C. 1986. "Theory Ahead of Business Cycle Measurement." Federal Reserve Bank of Minneapolis Quarterly Review, Fall.
[9] Tauchen, George. 1986. "Finite State Markov-Chain Approximations to Univariate and Vector Autoregressions." Economics Letters, 20, 177-181.