
Heterogeneous Agent Models in Continuous Time, II
ECO 235: Advanced Macroeconomics
Benjamin Moll
Princeton

Stanford
February 12, 2014

1 / 74

Plan for Today's Lecture

- Numerical solution of Huggett model
- Background: diffusion processes, HJB equations, Kolmogorov forward equations
- Wealth distribution and the business cycle
  - Do income and wealth distribution matter for the macroeconomy?
  - A better way (IMHO) of doing Krusell and Smith (1998)

2 / 74

Numerical Solution: Finite Difference Methods

3 / 74

Finite Difference Methods

- See http://www.princeton.edu/~moll/HACTproject.htm
- Hard part: HJB equation. Easy part: KF equation.
- Explain HJB using the neoclassical growth model; easily generalized to Huggett model

    ρV(k) = max_c U(c) + V'(k)[F(k) - δk - c]

- Functional forms

    U(c) = c^{1-σ}/(1-σ),   F(k) = k^α

- Use finite difference method. MATLAB code HJB_NGM.m
- Approximate V(k) at I discrete points in the state space, k_i, i = 1, ..., I. Denote the distance between grid points by Δk.
- Shorthand notation: V_i = V(k_i)
4 / 74

Finite Difference Approximations to V'(k_i)

Need to approximate V'(k_i). Three different possibilities:

    V'(k_i) ≈ (V_i - V_{i-1})/Δk ≡ V'_{i,B}        (backward difference)
    V'(k_i) ≈ (V_{i+1} - V_i)/Δk ≡ V'_{i,F}        (forward difference)
    V'(k_i) ≈ (V_{i+1} - V_{i-1})/(2Δk) ≡ V'_{i,C}  (central difference)
5 / 74
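The three difference approximations above can be sketched quickly in Python; the test function V(k) = √k and the grid are illustrative only, not from the slides:

```python
import numpy as np

I, dk = 100, 0.1
k = dk * np.arange(1, I + 1)            # grid k_i, i = 1, ..., I
V = np.sqrt(k)                          # toy value function for illustration

# on interior points i = 2, ..., I-1:
dVb = (V[1:-1] - V[:-2]) / dk           # backward: (V_i - V_{i-1}) / dk
dVf = (V[2:] - V[1:-1]) / dk            # forward:  (V_{i+1} - V_i) / dk
dVc = (V[2:] - V[:-2]) / (2 * dk)       # central:  (V_{i+1} - V_{i-1}) / (2 dk)

exact = 0.5 / np.sqrt(k[1:-1])          # V'(k) = 1/(2 sqrt(k))
err_b = np.max(np.abs(dVb - exact))     # one-sided: first-order accurate
err_c = np.max(np.abs(dVc - exact))     # central: second-order accurate
```

The central difference is more accurate on a smooth function, but as the next slides explain, accuracy is not the criterion that matters for the HJB equation.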

Finite Difference Approximations to V'(k_i)

[Figure: forward, backward, and central difference approximations to V'(k) on the grid]

6 / 74

Finite Difference Approximation

FD approximation to the HJB equation is

    ρV_i = U(c_i) + V'_i [F(k_i) - δk_i - c_i]   (∗)

where c_i = (U')^{-1}(V'_i), and V'_i is one of the backward, forward, or central FD approximations.

Two complications:
1. which FD approximation to use? ⇒ upwind scheme
2. (∗) is extremely non-linear, need to solve iteratively: explicit vs. implicit method

7 / 74

Which FD Approximation?

- Which of these you use is extremely important
- Correct way: use so-called upwind scheme. Rough idea:
  - forward difference whenever drift of state variable positive
  - backward difference whenever drift of state variable negative
- In our example: define

    s_{i,F} = F(k_i) - δk_i - (U')^{-1}(V'_{i,F}),   s_{i,B} = F(k_i) - δk_i - (U')^{-1}(V'_{i,B})

- Approximate the derivative as follows:

    V'_i = V'_{i,F} 1{s_{i,F} > 0} + V'_{i,B} 1{s_{i,B} < 0} + V̄'_i 1{s_{i,F} < 0 < s_{i,B}}

  where 1{·} is the indicator function, and V̄'_i = U'(F(k_i) - δk_i).
- Where does the V̄'_i term come from? Answer:
  - since V is concave, V'_{i,F} < V'_{i,B} (see figure) ⇒ s_{i,F} < s_{i,B}
  - if s_{i,F} < 0 < s_{i,B}, set s_i = 0 ⇒ V'(k_i) = U'(F(k_i) - δk_i), i.e. we're at a steady state.
- Note: importantly, this avoids using the points i = 0 and i = I + 1

8 / 74
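The upwind choice of V'(k_i) can be sketched as follows for CRRA utility, where (U')^{-1}(x) = x^{-1/σ}; the parameters and the concave guess for V are illustrative, not from the slides:

```python
import numpy as np

sig, al, delta, dk = 2.0, 0.3, 0.05, 0.1   # illustrative sigma, alpha, delta
k = dk * np.arange(1, 101)
V = np.log(k + 1.0)                        # some concave, increasing guess

dVf = np.empty_like(V); dVb = np.empty_like(V)
dVf[:-1] = (V[1:] - V[:-1]) / dk; dVf[-1] = dVf[-2]   # forward difference
dVb[1:] = (V[1:] - V[:-1]) / dk;  dVb[0] = dVb[1]     # backward difference

y = k**al - delta * k                      # F(k) - delta*k
sf = y - dVf ** (-1 / sig)                 # drift s_{i,F}
sb = y - dVb ** (-1 / sig)                 # drift s_{i,B}
dV_bar = y ** (-sig)                       # steady-state value U'(F(k) - delta*k)

# forward diff if drift positive, backward if negative, steady state otherwise
# (by concavity sf <= sb, so these three cases partition the grid)
dV_up = dVf * (sf > 0) + dVb * ((sf <= 0) & (sb < 0)) \
        + dV_bar * ((sf <= 0) & (sb >= 0))
```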

Iterative Method

Idea: solve the FOC for given V^n, update V^{n+1} according to

    (V_i^{n+1} - V_i^n)/Δ + ρV_i^n = U(c_i^n) + (V^n)'(k_i)[F(k_i) - δk_i - c_i^n]   (∗∗)

Algorithm: guess V_i^0, i = 1, ..., I and for n = 0, 1, 2, ... follow
1. Compute (V^n)'(k_i) using the FD approximation on the previous slide.
2. Compute c^n from c_i^n = (U')^{-1}[(V^n)'(k_i)]
3. Find V^{n+1} from (∗∗).
4. If V^{n+1} is close enough to V^n: stop. Otherwise, go to step 1.

See MATLAB code http://www.princeton.edu/~moll/HACTproject/HJB_NGM.m

Important parameter: Δ = step size, cannot be too large (CFL condition).
9 / 74
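The explicit algorithm above can be sketched end-to-end in Python. This is a sketch of the scheme, not the HJB_NGM.m code: all parameter values, the grid, the initial guess, and the boundary treatment (imposing V' = U'(F - δk) at the first and last grid point) are illustrative assumptions.

```python
import numpy as np

sig, al, rho, delta = 2.0, 0.3, 0.05, 0.05   # illustrative CRRA/technology params
k = np.linspace(0.5, 10.0, 100)
dk = k[1] - k[0]
y = k**al - delta * k                        # F(k) - delta*k, positive on this grid

U = lambda c: c**(1 - sig) / (1 - sig)
V = U(y) / rho                               # guess: consume net output forever
Delta = 0.01                                 # step size, kept small (CFL)

dist = []
for n in range(10000):
    dVf = np.empty_like(V); dVb = np.empty_like(V)
    dVf[:-1] = (V[1:] - V[:-1]) / dk
    dVf[-1] = y[-1] ** (-sig)                # boundary: V'(k_I) = U'(F - delta*k)
    dVb[1:] = (V[1:] - V[:-1]) / dk
    dVb[0] = y[0] ** (-sig)                  # boundary: V'(k_1) = U'(F - delta*k)
    cf = np.maximum(dVf, 1e-10) ** (-1 / sig)  # consumption from forward FOC
    cb = np.maximum(dVb, 1e-10) ** (-1 / sig)  # consumption from backward FOC
    sf, sb = y - cf, y - cb                  # implied savings drifts
    If = sf > 0                              # upwind: forward if drift positive
    Ib = (~If) & (sb < 0)                    # backward if drift negative
    I0 = (~If) & (~Ib)                       # else steady state: c = F - delta*k
    c = cf * If + cb * Ib + y * I0
    dV = dVf * If + dVb * Ib + y ** (-sig) * I0
    V_new = V + Delta * (U(c) + dV * (y - c) - rho * V)   # explicit update (**)
    dist.append(np.abs(V_new - V).max())
    V = V_new
```

With these settings the sup-norm change `dist` shrinks by orders of magnitude over the iterations, and the converged V is increasing in k.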

Convergence

Do we know this converges? Yes, if
1. upwind scheme done correctly
2. step size Δ not too large

References that prove this (hard to read... but easy to cite!):
- Barles and Souganidis (1991), "Convergence of approximation schemes for fully nonlinear second order equations."
- Oberman (2006), "Convergent Difference Schemes for Degenerate Elliptic and Parabolic Equations: Hamilton-Jacobi Equations and Free Boundary Problems."

10 / 74

Efficiency: Implicit Method

- Explicit method pretty inefficient: I need 5,990 iterations (though still super fast)
- Efficiency can be improved by using an implicit method

    (V_i^{n+1} - V_i^n)/Δ + ρV_i^{n+1} = U(c_i^n) + (V^{n+1})'(k_i)[F(k_i) - δk_i - c_i^n]

- Each step n involves solving a linear system of the form

    ((ρ + 1/Δ)I - A^n) V^{n+1} = U(c^n) + (1/Δ)V^n

- Advantage: step size Δ can be much larger ⇒ more stable.
- Implicit method preferable in general.
11 / 74
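One implicit step can be sketched as a linear solve. Here A^n is the upwind transition (intensity) matrix implied by the savings drift; the drift and utility flow below are made up purely for illustration, and a dense solver is used for brevity (in practice A^n is tridiagonal and one would use a sparse solver):

```python
import numpy as np

rho, Delta, n, dk = 0.05, 100.0, 50, 0.2
k = dk * np.arange(1, n + 1)
s_drift = 1.0 - 2.0 * k / k[-1]             # made-up drift: + at low k, - at high k
u = np.log(k)                               # made-up flow utility U(c^n)
V = u / rho                                 # current iterate V^n

sp = np.maximum(s_drift, 0.0) / dk          # intensity of moving one grid point up
sm = np.maximum(-s_drift, 0.0) / dk         # intensity of moving one grid point down
sp[-1] = 0.0; sm[0] = 0.0                   # no flow out of the grid
# tridiagonal intensity matrix: rows sum to zero
A = np.diag(-(sp + sm)) + np.diag(sp[:-1], 1) + np.diag(sm[1:], -1)

# solve ((rho + 1/Delta) I - A) V^{n+1} = U(c^n) + (1/Delta) V^n
B = (rho + 1.0 / Delta) * np.eye(n) - A
V_new = np.linalg.solve(B, u + V / Delta)
```

Note that B is strictly diagonally dominant for any Δ > 0, which is why the implicit step tolerates much larger step sizes than the explicit one.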

The Magic of Upwind Schemes

- No boundary conditions are needed!
- How is that possible? Don't we know from MAT 101 that solving a first-order ODE requires a boundary condition?
- Answer: in effect, we used the steady state as a boundary condition, V'(k*) = U'(c*):

    V'(k_i) = U'(c*),                 k_i = k*
    V'(k_i) = (V_{i+1} - V_i)/Δk,     k_i < k*
    V'(k_i) = (V_i - V_{i-1})/Δk,     k_i > k*

- But we did it without having to actually calculate the steady state (we never use the F'(k*) = ρ + δ equation). The code chooses the steady state by itself.
- Upwind magic much more general: works for all stochastic models, which do not even have a steady state, e.g. Huggett.

12 / 74

Deeper Reason: Unique Viscosity Solution

- Basically: there is a sense in which the HJB equation for the neoclassical growth model has a unique solution, independent of boundary conditions
- 3 theorems (references below)
  - Theorem 1: HJB equation has a unique viscosity solution
  - Theorem 2: viscosity solution equals value function, i.e. solution to sequence problem.
  - Theorem 3: FD method converges to the unique viscosity solution; upwind scheme picks the viscosity solution
- Comments
  - viscosity solution = bazooka, not really needed here, designed for non-differentiable PDEs
  - important for our purposes: uniqueness property
  - typically, viscosity solution = "nice," concave solution. By definition it doesn't blow up and has no convex kinks
13 / 74


References for Theorems

- Theorems 1 and 2 originally due to Crandall and Lions (1983), "Viscosity Solutions of Hamilton-Jacobi Equations."
- Somewhat more accessible: book by Bardi and Capuzzo-Dolcetta (1997), "Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations."
- Theorem 3 originally due to Barles and Souganidis (1991), "Convergence of approximation schemes for fully nonlinear second order equations."

15 / 74

Deeper Reason: Unique Viscosity Solution

- What do I mean by "unique solution, independent of boundary conditions"?
- Consider the HJB equation for V(x) on [x₀, x₁].
  - MAT 101: given a boundary condition V(x₀) = v₀, say, the HJB equation has a unique classical solution V_classical(x)
  - Theorem 1 basically says: there is a unique v₀ such that V_classical(x) is "nice"
- Related: in the neoclassical growth model and many other models
  - drift positive at lower end of state space
  - drift negative at upper end of state space
  - Since the HJB equation is forward-looking, it cannot possibly matter what happens at the boundaries of the state space: the HJB equation does not "see" the boundary condition
  - Exception: models with state constraints, e.g. Huggett. But need a boundary condition only if the state constraint binds.
16 / 74

General Lesson from all this

The MAT 101 way is not the right way to think about HJB equations, i.e. you don't want to think of them as ODEs that require boundary conditions!

17 / 74

Numerical Solution of Huggett Model

http://www.princeton.edu/~moll/HACTproject.htm

18 / 74

Kolmogorov Forward Equation

- Given the HJB equation, solving the KF equation is a piece of cake!
- Recall: solving the HJB equation using the implicit scheme involves solving the linear system

    ((ρ + 1/Δ)I - A^n) v^{n+1} = U(c^n) + (1/Δ)v^n

- Key: the matrix A^n encodes the evolution of the stochastic process
- Stationary distribution simply solves

    A^T g = 0

- For details see section 2 of http://www.princeton.edu/~moll/HACTproject/numerical_MATLAB.pdf
19 / 74
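Solving A^T g = 0 can be sketched with the same upwind intensity matrix as in the HJB step; the mean-reverting drift below is made up for illustration, and the one-dimensional null space is pinned down with the normalization Σ_i g_i Δk = 1:

```python
import numpy as np

n, dk = 50, 0.2
k = dk * np.arange(1, n + 1)
s_drift = 0.2 * (k.mean() - k)              # made-up drift pushing mass mid-grid
sp = np.maximum(s_drift, 0.0) / dk          # upwind intensities, as in HJB step
sm = np.maximum(-s_drift, 0.0) / dk
sp[-1] = 0.0; sm[0] = 0.0                   # no flow out of the grid
A = np.diag(-(sp + sm)) + np.diag(sp[:-1], 1) + np.diag(sm[1:], -1)

# stack A^T g = 0 with the normalization row dk * sum(g) = 1 and solve
M = np.vstack([A.T, dk * np.ones((1, n))])
b = np.concatenate([np.zeros(n), [1.0]])
g, *_ = np.linalg.lstsq(M, b, rcond=None)
```

With this drift all mass ends up where the drift changes sign, near the middle of the grid, which is exactly what the stationary KF equation with no diffusion predicts.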

Saving and Wealth Distribution given r

[Figure: saving policies s₁(a), s₂(a) and densities g₁(a), g₂(a), each plotted against wealth a]

20 / 74

Generalization: Continuum of z-Types

[Figure: saving policy s(a, z) as a function of wealth a and productivity z]

21 / 74

Generalization: Continuum of z-Types

22 / 74

Transition Dynamics

movie here

23 / 74

Generalization: Aiyagari, Transition

[Figure: transition dynamics against years t — (a) Capital K(t), (b) GDP Y(t), (c) Wage w(t), (d) Interest Rate r(t)]


24 / 74

Background: Diffusion Processes

25 / 74

Diffusion Processes

- A diffusion is simply a continuous-time Markov process (with continuous sample paths, i.e. no jumps)
- Simplest possible diffusion: standard Brownian motion (sometimes also called a Wiener process)
- Definition: a standard Brownian motion is a stochastic process W which satisfies

    W(t + Δt) - W(t) = ε_t √Δt,   ε_t ~ N(0, 1),   W(0) = 0

- Not hard to see: W(t) ~ N(0, t)
- Continuous-time analogue of a discrete-time random walk:

    W_{t+1} = W_t + ε_t,   ε_t ~ N(0, 1)
26 / 74
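The defining increment W(t + Δt) - W(t) = ε√Δt is all that is needed to simulate a standard Brownian motion; a Monte Carlo sketch (sample sizes are arbitrary) checking E[W(t)] = 0 and Var[W(t)] = t at t = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 20000, 100, 0.01       # horizon t = n_steps * dt = 1
eps = rng.standard_normal((n_paths, n_steps))
W = np.cumsum(eps * np.sqrt(dt), axis=1)      # W(t) along each sample path
W1 = W[:, -1]                                  # W(1) across paths
mean1, var1 = W1.mean(), W1.var()              # should be close to 0 and 1
```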

Standard Brownian Motion

Note: mean zero, E(W(t)) = 0 ... but the variance blows up, Var(W(t)) = t.

27 / 74

Brownian Motion

- Can be generalized:

    x(t) = x(0) + μt + σW(t)

- Since E(W(t)) = 0 and Var(W(t)) = t:

    E[x(t) - x(0)] = μt,   Var[x(t) - x(0)] = σ²t

- This is called a Brownian motion with drift μ and variance σ²
- Can write this in differential form as

    dx(t) = μ dt + σ dW(t)

  where dW(t) ≡ lim_{Δt→0} ε_t √Δt, with ε_t ~ N(0, 1)
- This is called a stochastic differential equation
- Analogue of a stochastic difference equation:

    x_{t+1} = μ + x_t + σε_t,   ε_t ~ N(0, 1)
28 / 74


Further Generalizations: Diffusion Processes

- Can be generalized further (suppressing dependence of x and W on t):

    dx = μ(x)dt + σ(x)dW

  where μ and σ are any non-linear etc. etc. functions.
- This is called a diffusion process
- μ(·) is called the drift and σ(·) the diffusion.
- All results can be extended to the case where they also depend on t, μ(x, t), σ(x, t), but abstract from this for now.
- The amazing thing about diffusion processes: by choosing the functions μ and σ, you can get pretty much any stochastic process you want (except jumps)
30 / 74

Example 1: Ornstein-Uhlenbeck Process

- Brownian motion dx = μdt + σdW is not stationary (random walk). But the following process is:

    dx = θ(x̄ - x)dt + σdW

- Analogue of an AR(1) process with autocorrelation e^{-θ}:

    x_{t+1} = θx̄ + (1 - θ)x_t + σε_t

- That is, we just choose μ(x) = θ(x̄ - x) and we get a nice stationary process!
- This is called an Ornstein-Uhlenbeck process
31 / 74

Ornstein-Uhlenbeck Process

Can show: the stationary distribution is N(x̄, σ²/(2θ))
32 / 74
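The stationary distribution N(x̄, σ²/(2θ)) can be checked by a Euler-Maruyama simulation of dx = θ(x̄ - x)dt + σdW; parameter values below are illustrative:

```python
import numpy as np

theta, xbar, sigma = 0.5, 1.0, 0.2
dt, n_steps, n_paths = 0.01, 2000, 5000     # simulate to t = 20 >> 1/theta
rng = np.random.default_rng(1)
x = np.full(n_paths, xbar)                  # start all paths at the mean
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    x += theta * (xbar - x) * dt + sigma * dW
m, v = x.mean(), x.var()
# theory: mean xbar = 1.0, variance sigma^2 / (2*theta) = 0.04
```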

Example 2: "Moll Process"

- Design a process that stays in the interval [0, 1] and mean-reverts around 1/2:

    μ(x) = θ(1/2 - x),   σ(x) = σx(1 - x)

    dx = θ(1/2 - x)dt + σx(1 - x)dW

- Note: the diffusion goes to zero at the boundaries, σ(0) = σ(1) = 0, & the drift mean-reverts ⇒ always stay in [0, 1]

33 / 74

Other Examples

- Geometric Brownian motion:

    dx = μx dt + σx dW

  x ∈ [0, ∞), no stationary distribution: log x(t) ~ N((μ - σ²/2)t, σ²t).
- Feller square root process (finance: Cox-Ingersoll-Ross):

    dx = θ(x̄ - x)dt + σ√x dW

  x ∈ [0, ∞), stationary distribution is Gamma(η, 1/ν), i.e.

    f(x) ∝ e^{-νx} x^{η-1},   ν = 2θ/σ²,   η = 2θx̄/σ²

- Other processes in Wong (1964), "The Construction of a Class of Stationary Markoff Processes."


34 / 74
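The geometric Brownian motion example can be simulated exactly through its log, log x(t) = log x(0) + (μ - σ²/2)t + σW(t); a sketch with illustrative parameters checking the stated distribution:

```python
import numpy as np

mu, sigma, t, x0 = 0.05, 0.3, 2.0, 1.0
rng = np.random.default_rng(3)
W = np.sqrt(t) * rng.standard_normal(100000)   # W(t) ~ N(0, t)
logx = np.log(x0) + (mu - 0.5 * sigma**2) * t + sigma * W
m, v = logx.mean(), logx.var()
# theory: mean (mu - sigma^2/2)*t = 0.01, variance sigma^2 * t = 0.18
```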

Background: HJB Equations for Diffusion Processes

35 / 74

Stochastic Optimal Control

Generic problem:

    V(x₀) = max_{u(t)} E₀ ∫₀^∞ e^{-ρt} h(x(t), u(t)) dt

subject to the law of motion for the state

    dx(t) = g(x(t), u(t))dt + σ(x(t))dW(t)   and   u(t) ∈ U

for t ≥ 0, x(0) = x₀ given.
- Deterministic problem: special case σ(x) ≡ 0.
- In general x ∈ R^m, u ∈ R^n. For now do the scalar case.
36 / 74

Stochastic HJB Equation: Scalar Case

Claim: the HJB equation is

    ρV(x) = max_{u∈U} h(x, u) + V'(x)g(x, u) + (1/2)V''(x)σ²(x)

- Here: on purpose no derivation (cookbook)
- In case you care, see any textbook, e.g. chapter 2 in Stokey (2008)
37 / 74

Just for Completeness: Multivariate Case

- Let x ∈ R^m, u ∈ R^n.
- For fixed x, define the m × m covariance matrix

    σ²(x) = σ(x)σ(x)^T

  (this is a function σ²: R^m → R^m × R^m)
- The HJB equation is

    ρV(x) = max_{u∈U} h(x, u) + Σ_{i=1}^m ∂_i V(x) g_i(x, u) + (1/2) Σ_{i=1}^m Σ_{j=1}^m σ²_{ij}(x) ∂_{ij} V(x)

- In vector notation

    ρV(x) = max_{u∈U} h(x, u) + ∇_x V(x)·g(x, u) + (1/2) tr(Δ_x V(x) σ²(x))

  - ∇_x V(x): gradient of V (dimension m × 1)
  - Δ_x V(x): Hessian of V (dimension m × m).
38 / 74

HJB Equation: Endogenous and Exogenous State

- Lots of problems have the form x = (x₁, x₂)
  - x₁: endogenous state
  - x₂: exogenous state

    dx₁ = g(x₁, x₂, u)dt
    dx₂ = μ(x₂)dt + σ(x₂)dW

- Special case with

    g(x) = [ g(x₁, x₂, u) ; μ(x₂) ],   σ(x) = [ 0 ; σ(x₂) ]

- Claim: the HJB equation is

    ρV(x₁, x₂) = max_{u∈U} h(x₁, x₂, u) + ∂₁V(x₁, x₂)g(x₁, x₂, u)
                 + ∂₂V(x₁, x₂)μ(x₂) + (1/2)∂₂₂V(x₁, x₂)σ²(x₂)
39 / 74

Example 1: Real Business Cycle Model

    V(k₀, A₀) = max_{c(t)} E₀ ∫₀^∞ e^{-ρt} U(c(t)) dt

subject to

    dk = [AF(k) - δk - c]dt
    dA = μ(A)dt + σ(A)dW

for t ≥ 0, k(0) = k₀, A(0) = A₀ given.
- Here: x₁ = k, x₂ = A, u = c
- h(x, u) = U(u)
- g(x, u) = x₂F(x₁) - δx₁ - u
40 / 74

Example 1: Real Business Cycle Model

HJB equation is

    ρV(k, A) = max_c U(c) + ∂_k V(k, A)[AF(k) - δk - c]
               + ∂_A V(k, A)μ(A) + (1/2)∂_AA V(k, A)σ²(A)

41 / 74

Example 2: Huggett Model

    V(a₀, z₀) = max_{c(t)} E₀ ∫₀^∞ e^{-ρt} U(c(t)) dt

s.t.

    da = [z + ra - c]dt
    dz = μ(z)dt + σ(z)dW
    a ≥ a̲

for t ≥ 0, a(0) = a₀, z(0) = z₀ given.
- Here: x₁ = a, x₂ = z, u = c
- h(x, u) = U(u)
- g(x, u) = x₂ + rx₁ - u
42 / 74

Example 2: Huggett Model

HJB equation is

    ρV(a, z) = max_c U(c) + ∂_a V(a, z)[z + ra - c]
               + ∂_z V(a, z)μ(z) + (1/2)∂_zz V(a, z)σ²(z)

43 / 74

Example 2: Huggett Model

- Special Case 1: z is a geometric Brownian motion, dz = μz dt + σz dW:

    ρV(a, z) = max_c U(c) + ∂_a V(a, z)[z + ra - c]
               + ∂_z V(a, z)μz + (1/2)∂_zz V(a, z)σ²z²

- Special Case 2: z is a Feller square root process, dz = θ(z̄ - z)dt + σ√z dW:

    ρV(a, z) = max_c U(c) + ∂_a V(a, z)[z + ra - c]
               + ∂_z V(a, z)θ(z̄ - z) + (1/2)∂_zz V(a, z)σ²z
44 / 74

Background: Kolmogorov Forward Equations

45 / 74

Kolmogorov Forward Equations

- Let x be a scalar diffusion

    dx = μ(x)dt + σ(x)dW,   x(0) = x₀

- Suppose we're interested in the evolution of the distribution of x, f(x, t), and in particular in the limit lim_{t→∞} f(x, t).
- Natural thing to care about, especially in heterogeneous agent models
- Example 1: x = wealth
  - μ(x) determined by saving behavior and return on investments
  - σ(x) by return risk
  - ⇒ microfound later
- Example 2: x = city size (Gabaix and others)


46 / 74

Kolmogorov Forward Equations

- Fact: given an initial distribution f(x, 0) = f₀(x), f(x, t) satisfies the PDE

    ∂_t f(x, t) = -∂_x[μ(x)f(x, t)] + (1/2)∂_xx[σ²(x)f(x, t)]

- This PDE is called the Kolmogorov Forward Equation
- Note: in math this is often called the Fokker-Planck Equation
- Can be extended to the case where x is a vector as well.
- Corollary: if a stationary distribution, lim_{t→∞} f(x, t) = f(x), exists, it satisfies the ODE

    0 = -(d/dx)[μ(x)f(x)] + (1/2)(d²/dx²)[σ²(x)f(x)]
47 / 74
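The stationary KF equation ties together the two OU results above: the density N(x̄, σ²/(2θ)) should make the right-hand side vanish. A finite-difference sketch of this check, with illustrative parameters:

```python
import numpy as np

theta, xbar, sigma = 0.5, 1.0, 0.2
var = sigma**2 / (2 * theta)                         # claimed stationary variance
x = np.linspace(xbar - 4 * np.sqrt(var), xbar + 4 * np.sqrt(var), 2001)
dx = x[1] - x[0]
f = np.exp(-(x - xbar) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

mu_f = theta * (xbar - x) * f                        # mu(x) f(x)
d1 = (mu_f[2:] - mu_f[:-2]) / (2 * dx)               # d/dx [mu f], centered
d2 = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2          # d^2/dx^2 f (sigma constant)
residual = -d1 + 0.5 * sigma**2 * d2                 # stationary KFE right-hand side
max_resid = np.max(np.abs(residual))                 # should be ~0 up to truncation
```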

Just for Completeness: Multivariate Case

- Let x ∈ R^m.
- As before, define the m × m covariance matrix σ²(x) = σ(x)σ(x)^T
- The Kolmogorov Forward Equation is

    ∂_t f(x, t) = -Σ_{i=1}^m ∂_i[μ_i(x)f(x, t)] + (1/2) Σ_{i=1}^m Σ_{j=1}^m ∂_{ij}[σ²_{ij}(x)f(x, t)]
48 / 74

Application: Stationary Distribution of Huggett Model

- Recall the Huggett model

    ρV(a, z) = max_c U(c) + ∂_a V(a, z)[z + ra - c]
               + ∂_z V(a, z)μ(z) + (1/2)∂_zz V(a, z)σ²(z)

- Denote the optimal saving policy function by s(a, z) = z + ra - c(a, z)
- Then g(a, z, t) solves

    ∂_t g(a, z, t) = -∂_a[s(a, z)g(a, z, t)] - ∂_z[μ(z)g(a, z, t)] + (1/2)∂_zz[σ²(z)g(a, z, t)]
49 / 74

Wealth Distribution and the Business Cycle

50 / 74

What We Do

1. Do income and wealth distribution matter for the macroeconomy?
2. Do macroeconomic aggregates in heterogeneous agent models with frictions behave like those in frictionless representative agent models?

Ask these questions in an economy in which heterogeneous producers
- face collateral constraints
- and fixed costs in production
51 / 74

Setup

- Continuum of entrepreneurs, heterogeneous in wealth a and productivity z
- Continuum of productivity types, diffusion process
- For now: no workers, occupational choice, etc.
- Entrepreneurial preferences

    E₀ ∫₀^∞ e^{-ρt} u(c_t) dt
52 / 74

Technologies

Two technologies: productive and unproductive

    y = f_u(z, A, k) = zA f̂(k)
    y = f_p(z, A, k) = { zAB f̂(k - φ),  k ≥ φ
                       { 0,             k < φ

with B > 1, but fixed cost φ > 0.
- f̂ strictly increasing and concave, but f_p non-convex
- Sources of uncertainty:
  - z: idiosyncratic shock, diffusion process
  - A: aggregate shock, two-state Poisson process
53 / 74

Setup

- Collateral constraints

    k ≤ λa,   λ ≥ 1.

- Profit maximization:

    π(a, z, A; r) = max{π_u(a, z, A; r), π_p(a, z, A; r)}
    π_j(a, z, A; r) = max_{k≤λa} f_j(z, A, k) - (r + δ)k,   j = p, u.

- Entrepreneurs solve

    max_{c_t} E₀ ∫₀^∞ e^{-ρt} u(c_t) dt   s.t.   da_t = [π(a_t, z_t, A_t; r_t) + r_t a_t - c_t]dt


54 / 74

Plan

1. Model without aggregate shocks, A_t ≡ 1
2. Model with aggregate shocks, A_t ∈ {A_ℓ, A_h}, Poisson
55 / 74

Without Aggregate Shocks

Capital market clearing:

    ∫ [k_u(a, z; r(t))1{π_u > π_p} + k_p(a, z; r(t))1{π_u < π_p}] g(a, z, t) da dz
        = ∫ a g(a, z, t) da dz                                             (EQ)

    ρv(a, z, t) = max_c u(c) + ∂_a v(a, z, t)[π(a, z; r(t)) + r(t)a - c]
        + ∂_z v(a, z, t)μ(z) + (1/2)∂_zz v(a, z, t)σ²(z) + ∂_t v(a, z, t)  (HJB)

    ∂_t g(a, z, t) = -∂_a[s(a, z, t)g(a, z, t)] - ∂_z[μ(z)g(a, z, t)]
        + (1/2)∂_zz[σ²(z)g(a, z, t)]                                       (KFE)

    s(a, z, t) = π(a, z; r(t)) + r(t)a - c(a, z, t)

Given the initial condition g₀(a, z), the two PDEs (HJB) and (KFE) together with (EQ) fully characterize equilibrium.

56 / 74

Dynamics of Wealth Distribution

movie here

57 / 74

Dynamics of Wealth Distribution

[Figure: density g(a, z, t) over wealth a and productivity z]
58 / 74

With Aggregate Shocks

- A_t ∈ {A_ℓ, A_h}, Poisson with intensities λ_ℓ, λ_h
- As in discrete time: necessary to include the entire wealth distribution as a state variable
- Aggregate state: (A_i, g), i = ℓ, h
  - Optimal saving policy function: s_i(a, z, g), i = ℓ, h
  - Interest rate r_i(g), i = ℓ, h
  - Capital demands: k_{i,u}(a, z, g; r) and k_{i,p}(a, z, g; r), i = ℓ, h
- Equilibrium interest rate r_i(g) solves

    ∫ [k_{i,u}(a, z, g; r_i(g))1{π_u > π_p} + k_{i,p}(a, z, g; r_i(g))1{π_u < π_p}] g(a, z) da dz
        = ∫ a g(a, z) da dz,   i = ℓ, h
59 / 74

Entrepreneurs' Problem

Useful to write the law of motion of the distribution as

    ∂_t g(a, z, t) = T[g(·, t), s_i(·, t)](a, z)

where T is the Kolmogorov Forward operator

    T[g, s_i](a, z) = -∂_a[s_i(a, z, g)g(a, z)] - ∂_z[μ(z)g(a, z)] + (1/2)∂_zz[σ²(z)g(a, z)]

60 / 74

Monster Bellman

Entrepreneurs' problem: recursive formulation

    ρV_i(a, z, g) = max_c u(c) + ∂_a V_i(a, z, g)[π_i(a, z, g) + r_i(g)a - c]
        + ∂_z V_i(a, z, g)μ(z) + (1/2)∂_zz V_i(a, z, g)σ²(z)
        + λ_i[V_j(a, z, g) - V_i(a, z, g)]
        + ∫ [δV_i(a, z, g)/δg(a, z)] T[g, s_i](a, z) da dz

- δV_i/δg(a, z): functional derivative of V_i with respect to g at (a, z).
- Saving policy function

    s_i(a, z, g) = π_i(a, z, g) + r_i(g)a - c_i(a, z, g)
                 = π_i(a, z, g) + r_i(g)a - (u')^{-1}(∂_a V_i(a, z, g)).

- Recursive equilibrium: solution to these two equations.


61 / 74

Approximation Method: Basic Idea

1. Use a discrete-time approximation to the A process
2. Consider only finitely many shocks
3. Keep track of all possible histories
4. Between shocks: same (HJB) and (KFE) as without shocks, one system for each history
5. Then piece them together in the correct way

Example:
- 5 shocks at times t = 5, 10, 15, 20, 25
- 2⁵ = 32 histories

62 / 74
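The bookkeeping in the example above — every possible history of a two-state aggregate shock — is a small enumeration; a sketch (state labels "l"/"h" are just placeholders for A_ℓ, A_h):

```python
from itertools import product

# enumerate all histories A^n = (A_0, ..., A_4) for 5 shock dates;
# with two aggregate states there are 2^5 = 32 of them
states = ("l", "h")
histories = list(product(states, repeat=5))
n_hist = len(histories)                    # 2**5 = 32
```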

Histories with 8 Shocks every 20 Periods

[Figure: tree of aggregate-shock histories over 200 periods]

- Obvious problem: blows up on you, e.g. 2⁸ = 256 histories.
- Working on: using this approximation as input into a fancier approximation scheme
- But the economics seems robust to me; will bet anyone who thinks that things will change a lot

63 / 74

Approximation Method: Details

- N shocks hit at times t_n = nΔ, n = 0, 1, 2, ..., N
- Approximate Poisson process:

    Pr(A_{n+1} = A_i | A_n = A_i) = e^{-λ_iΔ}
    Pr(A_{n+1} = A_j | A_n = A_i) = 1 - e^{-λ_iΔ}

- Converges to Poisson as Δ → 0 and N → ∞.
- Denote histories by Aⁿ = (A₀, ..., A_n) where A_n ∈ {A_ℓ, A_h}
- Keep track of all v(a, z, t, Aⁿ) and g(a, z, t, Aⁿ)

64 / 74
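The discrete-time approximation of the two-state Poisson process — switch away from state i with probability 1 - e^{-λ_iΔ} each period — can be simulated and checked against these probabilities; the intensities below are illustrative, not the paper's:

```python
import numpy as np

lam = {0: 0.3, 1: 0.5}                  # illustrative intensities lambda_l, lambda_h
Delta, N = 0.1, 200000
rng = np.random.default_rng(2)
A, switches, visits = 0, {0: 0, 1: 0}, {0: 0, 1: 0}
for _ in range(N):
    visits[A] += 1
    if rng.random() < 1.0 - np.exp(-lam[A] * Delta):  # switch probability
        switches[A] += 1
        A = 1 - A
freq0 = switches[0] / visits[0]         # empirical switch frequency from state 0
freq1 = switches[1] / visits[1]         # empirical switch frequency from state 1
```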

Approximation Method: Details

Between shocks: for all histories Aⁿ,

    ρv(a, z, t, Aⁿ) = max_c u(c) + ∂_a v(a, z, t, Aⁿ)[π(a, z, t, Aⁿ) + r(t, Aⁿ)a - c]
        + ∂_z v(a, z, t, Aⁿ)μ(z) + (1/2)∂_zz v(a, z, t, Aⁿ)σ²(z) + ∂_t v(a, z, t, Aⁿ)

    ∂_t g(a, z, t, Aⁿ) = -∂_a[s(a, z, t, Aⁿ)g(a, z, t, Aⁿ)]
        - ∂_z[μ(z)g(a, z, t, Aⁿ)] + (1/2)∂_zz[σ²(z)g(a, z, t, Aⁿ)]

Boundary conditions at all times t_{n+1} when shocks hit:

    v(a, z, t_{n+1}, Aⁿ) = Σ_{A_{n+1}=A_ℓ,A_h} Pr(A_{n+1}|Aⁿ) v(a, z, t_{n+1}, A^{n+1})

    g(a, z, t_{n+1}, A^{n+1}) = g(a, z, t_{n+1}, Aⁿ),   all Aⁿ.

Note: this uses
1. people are forward-looking, form expectations over all branches
2. continuity of g with respect to t (state variable)

65 / 74

The Experiments

- Experiment 1: comparison to representative agent model
  - two economies: economy with frictions and rep. agent economy
  - at t = 0, both in steady state (corresponding to no A shocks)
  - at t = 10, hit with A shock; compare impulse responses
- Experiment 2: effect of wealth distribution
  - two economies with frictions: one more unequal than the other
  - at t = 0, both in steady state (corresponding to no A shocks)
  - at t = 10, hit with A shock; compare impulse responses

66 / 74

The Corresponding Representative Agent

- RBC model with utility function u(c) and the following aggregate production function:

    F(K, A) = max_{k(z)} ∫ max{f_u(z, A, k(z)), f_p(z, A, k(z))} ω(z) dz
        s.t.   ∫ k(z)ω(z) dz ≤ K

  where ω(z) is the stationary distribution of the z-process.
- Solution:

    F(K, A) = max_{z̄} A Z(z̄) (K - φ(1 - Ω(z̄)))^α,

    Z(z̄) = ( ∫_0^{z̄} z^{1/(1-α)} ω(z) dz + B^{1/(1-α)} ∫_{z̄}^∞ z^{1/(1-α)} ω(z) dz )^{1-α}

  with Ω(·) the cdf of ω, so 1 - Ω(z̄) is the mass of entrepreneurs paying the fixed cost.

67 / 74

Parameterization

- Functional forms:

    u(c) = c^{1-γ}/(1-γ),   f̂(k) = k^α

- Productivity process: Ornstein-Uhlenbeck in logs,

    d log z_t = -θ log z_t dt + σ dW_t

  but reflected at z̲ and z̄ ≥ 0.
- Parameter values:

    B = 1.8, α = 0.6, δ = 0.05, γ = 2, ρ = 0.05, φ = 0.2, λ = 6, θ = 2, σ = 0.1,
    A_h = 1, A_ℓ = 0.95, a̲ = 0, ā = 30, z̲ = 0.6, z̄ = 1.4, T = 100
68 / 74

Frictions ⇒ much Slower Recovery

[Figure: impulse responses over years, frictionless vs. with frictions — (a) Productivity A_t, (b) Capital Stock K_t, (c) GDP Y_t, (d) Interest Rate r_t]

Note: GDP and interest rate computations currently imprecise

69 / 74

Dynamics of Wealth Distribution

movie here

70 / 74

Wealth Distribution at Peak

[Figure: density g(a, z, t) over wealth a and productivity z, year = 10]

71 / 74

Wealth Distribution at Trough

[Figure: density g(a, z, t) over wealth a and productivity z, year = 20]

72 / 74

Wealth Distribution 10 Years after Trough

[Figure: density g(a, z, t) over wealth a and productivity z, year = 30]

73 / 74

Next Steps

- Redistributive policies?
- Add workers, occupational choice
- Fancier approximation:
  - computations suggest: densities live in a relatively low-dimensional space
  - project densities on that space
  - use as state variable in value function
- What else?

74 / 74
