1. In an infinite-horizon capital/consumption model, if $k_t$ and $c_t$ are the capital stock and consumption at time $t$, we have $f(k_t) = c_t + k_{t+1}$ for $t \ge 0$, where $f$ is a given production function, and the total utility to be maximized is
$$\sum_{t=0}^{\infty} \beta^t U(c_t),$$
where $U$ is a given period utility function and $\beta \in (0, 1)$ is a discount factor. Rephrase this as a standard (infinite-horizon) control problem and write its Bellman equation.
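One standard rephrasing takes $k_t$ as the state and $c_t$ as the control, giving the Bellman equation $V(k) = \max_{0 \le c \le f(k)} \{\, U(c) + \beta V(f(k) - c) \,\}$. Below is a minimal value-iteration sketch of this equation, assuming the illustrative choices $U = \log$ and $f(k) = k^\alpha$ (not given in the problem), for which the exact optimal policy $c^*(k) = (1 - \alpha\beta)\,k^\alpha$ is known and can be used as a check:

```python
import numpy as np

# Bellman equation for the growth model:
#   V(k) = max_{0 <= c <= f(k)} U(c) + beta * V(f(k) - c)
# Illustrative choices (assumptions, not from the problem text): U = log, f(k) = k**alpha,
# for which the exact policy is c*(k) = (1 - alpha*beta) * k**alpha.
alpha, beta = 0.3, 0.9
f = lambda k: k**alpha
U = np.log

grid = np.linspace(0.05, 2.0, 200)   # capital grid
V = np.zeros_like(grid)

for _ in range(500):                 # value iteration to (approximate) fixed point
    # For each k_i, choose next-period capital k'_j on the grid with c = f(k_i) - k'_j > 0.
    c = f(grid)[:, None] - grid[None, :]               # c[i, j] = f(k_i) - k'_j
    values = np.where(c > 0, U(np.maximum(c, 1e-12)) + beta * V[None, :], -np.inf)
    V_new = values.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy_idx = values.argmax(axis=1)
c_star = f(grid) - grid[policy_idx]                    # computed policy
c_exact = (1 - alpha * beta) * f(grid)                 # known closed form
```

The computed policy `c_star` agrees with the closed form up to the grid resolution, which is the usual sanity check that the Bellman equation was set up correctly.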
2. Consider the discrete-time control problem:
$$\max_{u_0, \dots, u_T} \; \sum_{t=0}^{T} f(t, x_t, u_t)$$
subject to $x_0 = x$, $x_{t+1} = g(t, x_t, u_t)$ for $t = 0, \dots, T-1$ (here $f, g$ are $C^1$; $x_t, u_t \in \mathbb{R}$; $x \in \mathbb{R}$ is given). Rewrite this as a Lagrangian optimization problem with $2T+2$ variables $(x_0, \dots, x_T, u_0, \dots, u_T)$ and $T+1$ constraints. By applying the Lagrange condition to this problem, recover the maximum principle for the control problem (necessary conditions).
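A sketch of the intended computation, assuming the objective is $\sum_{t=0}^{T} f(t, x_t, u_t)$ (an assumption, since the objective did not survive extraction). With multipliers $\lambda_0, \dots, \lambda_T$ for the $T+1$ constraints:

```latex
% Lagrangian for the constraints x_0 = x and x_{t+1} = g(t, x_t, u_t), t = 0, ..., T-1:
\begin{aligned}
\mathcal{L} &= \sum_{t=0}^{T} f(t, x_t, u_t)
  + \lambda_0 (x - x_0)
  + \sum_{t=0}^{T-1} \lambda_{t+1}\bigl(g(t, x_t, u_t) - x_{t+1}\bigr), \\
% stationarity in u_t for t < T, and in u_T:
0 &= f_u(t, x_t, u_t) + \lambda_{t+1}\, g_u(t, x_t, u_t), \qquad
0 = f_u(T, x_T, u_T), \\
% stationarity in x_t for t < T gives the adjoint (costate) equation,
% and stationarity in x_T the transversality condition:
\lambda_t &= f_x(t, x_t, u_t) + \lambda_{t+1}\, g_x(t, x_t, u_t), \qquad
\lambda_T = f_x(T, x_T, u_T).
\end{aligned}
```

With the Hamiltonian $H(t, x, u, \lambda) = f(t, x, u) + \lambda\, g(t, x, u)$, these stationarity, adjoint, and transversality conditions are precisely the necessary conditions of the discrete-time maximum principle.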
3. Consider the problem
subject to the initial and terminal conditions $x_0 = a$, $x_T = b$. One may think of it as a control problem by setting $u_t = x_{t+1} - x_t$. Find the minimum and the optimal $x_0^*, \dots, x_T^*$ in two ways: directly (e.g. by the Lagrangian method); and by writing the fundamental equation of dynamic programming and computing $J_s(x)$ by backwards induction.
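The cost functional did not survive extraction; purely for illustration, assume the quadratic cost $\sum_{t=0}^{T-1} (x_{t+1} - x_t)^2$, for which the direct (Lagrangian) answer is the linear interpolation $x_t^* = a + t(b-a)/T$ with minimum $(b-a)^2/T$. A grid-based backwards-induction sketch for $J_s(x)$ under that assumption:

```python
import numpy as np

# Assumed (hypothetical) cost: minimize sum_{t=0}^{T-1} (x_{t+1} - x_t)^2,
# with x_0 = a, x_T = b and control u_t = x_{t+1} - x_t.
# Direct answer: linear interpolation, minimum (b - a)**2 / T.
a, b, T = 0.0, 1.0, 4
grid = np.linspace(0.0, 1.0, 101)   # state grid containing a, b, and the optimal path

# Backwards induction: J_s(x) = min cost-to-go from x at time s, reaching b at time T.
J = np.where(np.isclose(grid, b), 0.0, np.inf)   # terminal condition J_T
for s in range(T - 1, -1, -1):
    # fundamental equation: J_s(x) = min_{x'} (x' - x)^2 + J_{s+1}(x')
    step_cost = (grid[None, :] - grid[:, None])**2   # [i, j] = (x'_j - x_i)^2
    J = (step_cost + J[None, :]).min(axis=1)

i_a = int(np.argmin(np.abs(grid - a)))
# J[i_a] and (b - a)**2 / T should agree (both approximately 0.25 here),
# since the optimal linear-interpolation path lies on the grid.
```

The agreement of the grid value `J[i_a]` with the closed-form minimum is the consistency check between the two solution methods the problem asks for.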
4. Consider the dynamic programming problem with "extended memory":
subject to $x_{t+1} = g(t, x_t, x_{t-1}, u_t)$ (where $x_0$ and $x_{-1}$ are given). Rephrase this as a standard dynamic programming problem (with twice as many state variables).
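The rephrasing packs the current and previous states into one augmented state $y_t = (x_t, x_{t-1})$, whose dynamics $y_{t+1} = G(t, y_t, u_t) := \bigl(g(t, x_t, x_{t-1}, u_t),\, x_t\bigr)$ are memoryless. A quick sketch, with a hypothetical linear $g$ (not from the problem text), checking that the augmented recursion reproduces the original one:

```python
# Hypothetical dynamics with one-step memory (an assumption for illustration only).
def g(t, x, x_prev, u):
    return 0.5 * x + 0.3 * x_prev + u

# Augmented, memoryless dynamics on y = (x_t, x_{t-1}).
def G(t, y, u):
    x, x_prev = y
    return (g(t, x, x_prev, u), x)

x0, xm1 = 1.0, 0.0
controls = [0.1, -0.2, 0.05, 0.0]

# Original recursion with memory: xs = [x_{-1}, x_0, x_1, ...].
xs = [xm1, x0]
for t, u in enumerate(controls):
    xs.append(g(t, xs[-1], xs[-2], u))

# Augmented, standard recursion: ys[t][0] should equal x_t.
y = (x0, xm1)
ys = [y]
for t, u in enumerate(controls):
    y = G(t, y, u)
    ys.append(y)

assert all(abs(yt[0] - xt) < 1e-12 for yt, xt in zip(ys, xs[1:]))
```

The same wrapping works for any $g$; the point is only that the new state carries the memory, so the standard dynamic programming machinery applies unchanged.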