Friday, January 04, 2008

Taro on Dynamic Optimization (2)

Example 1: Static Optimization (intratemporal optimization)

Let x be a good you can choose, and let p and m be the price of the good and your income, respectively. (Here the income is assumed to be given from heaven. If you want m to be earned by working, you must add a "labor term" to this model.)

max U(x)

s.t. px≦m


The Lagrangian you can use is:

L(x, m, λ) = U(x)-λ(px-m)

Or you can express it in another way,

V(p, m)=max{ U(x) s.t. px≦m }

I like the last one best, though. This is generally called the "Value function" (or "Indirect utility function"). Then, we solve for the choice, x=x(p,m), and it is called the "(Marshallian) Demand function".
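
To make Example 1 concrete, here is a minimal numerical sketch in Python. The log utility U(x)=ln(x) and the numbers p=2, m=10 are my own assumptions, not part of the model above; with a single good and increasing utility the budget binds, so the closed-form answers are x(p,m)=m/p and V(p,m)=U(m/p), and the code just checks this numerically.

import numpy as np
from scipy.optimize import minimize_scalar

def U(x):
    return np.log(x)                         # assumed utility for the sketch

def demand(p, m):
    # Marshallian demand x(p,m): maximize U(x) s.t. px <= m,
    # i.e. minimize -U(x) over the budget set (0, m/p]
    res = minimize_scalar(lambda x: -U(x), bounds=(1e-9, m / p), method="bounded")
    return res.x

def V(p, m):
    # value function (indirect utility): V(p,m) = max U(x) s.t. px <= m
    return U(demand(p, m))

p, m = 2.0, 10.0
print(demand(p, m), m / p)                   # numerical demand vs. closed form m/p
print(V(p, m), np.log(m / p))                # numerical value vs. closed form U(m/p)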


Example 2: Dynamic Optimization (intertemporal optimization)

Let x(t) be a good you can choose at time t (t=0,1,..,T-1), and let p and m(t) be the price of the good and the money you have at time t, respectively. For simplicity, the price p is assumed to be constant over time, and no new income arrives after t=0, so m(t) is simply whatever is left of the initial money m(0).


max U(x(0))+U(x(1))+U(x(2))+....+U(x(T-1))

s.t. m(t+1)-m(t)=-px(t), where m(0)=given
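
(Note that summing this constraint over t=0,...,T-1 gives px(0)+px(1)+....+px(T-1)=m(0)-m(T), so it is just a lifetime version of the static budget constraint px≦m.)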



The Lagrangian is:

L(x(t), m(t), λ(t)) = U(x(0))+U(x(1))+U(x(2))+....+U(x(T-1))

+λ(0)(px(0)-m(0)+m(1))+λ(1)(px(1)-m(1)+m(2))+....

+λ(T-1)(px(T-1)-m(T-1)+m(T))
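
If you want to see the first-order conditions that fall out of this Lagrangian, here is a small symbolic sketch in Python using sympy. The log utility and T=3 are my own assumptions for illustration. Differentiating with respect to x(t) gives U'(x(t))+λ(t)p=0, and differentiating with respect to m(t) for t=1,...,T-1 gives λ(t-1)=λ(t), so the multiplier is constant over time here (there is no discounting in this example) and, with the assumed utility, the same amount is consumed in every period.

import sympy as sp

T = 3
p = sp.symbols("p", positive=True)
x = sp.symbols("x0:3", positive=True)        # x(0), x(1), x(2)
m = sp.symbols("m0:4")                       # m(0), ..., m(3)
lam = sp.symbols("lambda0:3")                # lambda(0), lambda(1), lambda(2)
U = sp.log                                   # assumed utility for the sketch

# the Lagrangian above, written out for T = 3
L = (sum(U(x[t]) for t in range(T))
     + sum(lam[t] * (p * x[t] - m[t] + m[t + 1]) for t in range(T)))

# dL/dx(t) = 0 :  U'(x(t)) + lambda(t)*p = 0
for t in range(T):
    print(sp.Eq(sp.diff(L, x[t]), 0))

# dL/dm(t) = 0 for t = 1, ..., T-1 :  lambda(t-1) = lambda(t)
for t in range(1, T):
    print(sp.Eq(sp.diff(L, m[t]), 0))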

Or the problem can be expressed in another way,

V(m(t))=max{ U(x(t))+V(m(t+1)) s.t. m(t+1)-m(t)=-px(t) }

I also love the last one; it is generally called the "Bellman equation".
We then solve for the function x(t)=x(m(t)), which is called the "policy function".

(I will try to talk later about the "recursive way" to solve the problem of the dynamic optimization.)
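
In the meantime, here is a rough preview of that recursive solution: a minimal backward-induction sketch of the Bellman equation above in Python. The log utility, p=1, T=5, and the grids are all my own assumptions. Strictly speaking, with a finite horizon the value function also depends on t, which is why the sketch keeps a separate V for each step, starting from V(m(T))=0 and working backwards to recover the policy x(m(t)).

import numpy as np

T, p = 5, 1.0
m_grid = np.linspace(1e-3, 10.0, 200)        # grid for the state m(t) (assumed)

def U(x):
    return np.log(np.maximum(x, 1e-12))      # assumed utility for the sketch

V_next = np.zeros_like(m_grid)               # V(m(T)) = 0: no utility after T-1
policies = []                                # x(m) on the grid, for t = T-1, ..., 0

for t in reversed(range(T)):
    V_now = np.empty_like(m_grid)
    x_now = np.empty_like(m_grid)
    for i, m in enumerate(m_grid):
        x_choices = np.linspace(1e-6, m / p, 200)    # feasible spending at m
        m_next = m - p * x_choices                   # m(t+1) = m(t) - px(t)
        values = U(x_choices) + np.interp(m_next, m_grid, V_next)
        j = np.argmax(values)
        V_now[i], x_now[i] = values[j], x_choices[j]
    V_next = V_now
    policies.append(x_now)

# With log utility and no discounting the exact policy is x(m) = m/(p*(T-t)),
# so the t = 0 policy from the grid should be close to m/(p*T).
i = 100
print(policies[-1][i], m_grid[i] / (p * T))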

As you can see, the problem of intertemporal optimal choice is an extension of the problem of intratemporal optimal choice, in the sense that in the intertemporal problem we have one choice to make for every period we live.

In other words, in a dynamic optimization problem the same physical good consumed at different points in time is treated as a different good. If you want to learn about the basic optimization problem, please drop in at my previous post.

(Appendix)
Regarding the dynamic optimization problem, we also use the following function:

L(x(t), m(t), λ(t)) = U(x(0))+U(x(1))+U(x(2))+....+U(x(T-1))

+λ(0)(px(0)-m(0)+m(1))+λ(1)(px(1)-m(1)+m(2))+....

+λ(T-1)(px(T-1)-m(T-1)+m(T))

Rearranging the above function,

L(x(t), m(t), λ(t)) = U(x(0))+λ(0)px(0)+U(x(1))+λ(1)px(1)

+U(x(2))+λ(2)px(2)+....+U(x(T-1))+λ(T-1)px(T-1)

+λ(0)(m(1)-m(0))+λ(1)(m(2)-m(1))+....+λ(T-1)(m(T)-m(T-1))

and denoting the term U(x(t))+λ(t)px(t) by a new function
H(x(t), m(t), λ(t)),


L(x(t), m(t), λ(t)) = H(x(0), m(0), λ(0))+H(x(1), m(1), λ(1))

+H(x(2), m(2), λ(2))+....+H(x(T-1), m(T-1), λ(T-1))

+λ(0)(m(1)-m(0))+λ(1)(m(2)-m(1))+....+λ(T-1)(m(T)-m(T-1))

H(x(t), m(t), λ(t)) is, as you know, the "Hamiltonian function".
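
One nice consequence is that x(t) appears only inside its own Hamiltonian term, so maximizing the whole Lagrangian with respect to x(t) is the same as maximizing H(x(t), m(t), λ(t)) with respect to x(t). The quick symbolic sketch below checks this (again, the log utility and T=3 are my own assumptions).

import sympy as sp

T = 3
p = sp.symbols("p", positive=True)
x = sp.symbols("x0:3", positive=True)
m = sp.symbols("m0:4")
lam = sp.symbols("lambda0:3")
U = sp.log                                   # assumed utility for the sketch

# Hamiltonian terms H(x(t), m(t), lambda(t)) = U(x(t)) + lambda(t)*p*x(t)
H = [U(x[t]) + lam[t] * p * x[t] for t in range(T)]

# Lagrangian = sum of the Hamiltonians + the remaining multiplier terms
L = sum(H) + sum(lam[t] * (m[t + 1] - m[t]) for t in range(T))

# dL/dx(t) - dH(t)/dx(t) is identically zero for every t
for t in range(T):
    print(sp.simplify(sp.diff(L, x[t]) - sp.diff(H[t], x[t])))   # prints 0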
