---
tags: math, memo, macro
---
# Dynamic Programming in Macroeconomics
## Motivation
### Problem
Suppose an infinitely-lived agent wants to maximize her utility by solving the following problem:
\begin{gather*}
\sup_{\{x_{t+1}\}_{t=0}^\infty} \sum_{t=0}^\infty \beta^t u(x_t, x_{t+1})\\
\text{s.t. } x_{t+1} \in \Gamma(x_t), t=0, 1, 2, \dots\\
x_0 \in X \text{ given.}
\end{gather*}
First, she has exponentially discounted utility,
$$
U=\sum_{t=0}^\infty \beta^t u_t,
$$
so total utility is a discounted sum of the instantaneous utilities, $u_t$, across periods.
Second, at period $t$, she can choose $x_t$ from the set $\Gamma(x_{t-1})$. This decision jointly determines her utility in periods $t-1$ and $t$, and pins down $\Gamma(x_{t})$, the choice set of the next period.
Third, $x_0$ is given, so her first choice is picking $x_1$ among $\Gamma(x_{0})$.
Clearly, the solution to this problem is a sequence, $\{x_i\}_{i=1}^\infty$, and depends on $x_0$. How can we solve this problem for her?
### Similarity
One observation is that the problem is **similar** in every period; only the given state changes. At period $1$, the agent needs to solve,
$$
\sup_{\{x_{t+1}\}_{t=0}^\infty} \sum_{t=0}^\infty \beta^t u(x_t, x_{t+1}),
$$
given $x_0$. At period $2$, the problem becomes,
$$
\sup_{\{x_{t+1}\}_{t=1}^\infty} \sum_{t=1}^\infty \beta^{t-1} u(x_t, x_{t+1}),
$$
given $x_1$. The exponential discounting utility ensures the agent is time consistent: she will choose $x_{t+k}$ at period $t+k$ just as she planned at period $t$. Hence, we can rewrite the problem recursively.
### Dynamic Programming
Suppose the following function exists,
$$
v(x) = \sup_{y \in \Gamma(x)} [u(x, y) + \beta v(y)] \quad \forall x \in X
$$
Along an optimal path $\{x_t\}_{t=0}^\infty$, it should satisfy:
$$
v(x_t) = u(x_t, x_{t+1}) + \beta v(x_{t+1}), \quad t=0, 1, 2, \dots
$$
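Why does such a $v$ recover the value of the sequence problem? Unrolling the recursion along an optimal path (a sketch of the standard argument) gives
\begin{aligned}
v(x_0) &= u(x_0, x_1) + \beta v(x_1)\\
&= u(x_0, x_1) + \beta u(x_1, x_2) + \beta^2 v(x_2)\\
&= \sum_{t=0}^{T-1} \beta^t u(x_t, x_{t+1}) + \beta^T v(x_T),
\end{aligned}
so if $\beta^T v(x_T) \to 0$ as $T \to \infty$ (e.g., when $v$ is bounded and $\beta < 1$), the recursion delivers exactly $\sum_{t=0}^\infty \beta^t u(x_t, x_{t+1})$.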
It can be shown that, under certain conditions, $v$ exists and is the value function of the optimization problem,
\begin{gather*}
v(x_0) = \sup_{\{x_{t+1}\}_{t=0}^\infty} \sum_{t=0}^\infty \beta^t u(x_t, x_{t+1})\\
\text{s.t. } x_{t+1} \in \Gamma(x_t), t=0, 1, 2, \dots\\
x_0 \in X \text{ given,}
\end{gather*}
which can help us solve the problem. However, we need to prove $v$ exists first.
## Contraction mapping theorem
The contraction mapping theorem is also called Banach's fixed point theorem.
### Contraction mapping
Let $(X,d)$ be a complete metric space. Then a map $T : X \to X$ is called a contraction mapping on $X$ if there exists $\beta \in [0,1)$ such that
$$
d\left(T(x), T(y) \right) \le \beta d(x, y)
$$
for all $x,y \in X.$
### Statement
Let $(X,d)$ be a non-empty complete metric space with a contraction mapping $T:X\to X.$ Then $T$ admits a unique fixed-point $x^{*}$ in $X$ (i.e. $T(x^{*})=x^{*}$). Furthermore, $x^{*}$ can be found as follows: start with an arbitrary element $x_{0}\in X$ and define a sequence $\{x_n\}_{n=0}^\infty$ as
$$x_{n}=T(x_{n-1}) \text{ for } n\ge 1.$$
Then
$$\lim _{n\to \infty }x_{n}=x^{*}.$$
<details>
<summary>
Proof:
</summary>
Arbitrarily pick $x_0 \in X$ and construct $\{x_n\}_{n=0}^\infty$ as described above. We want to show this sequence is a Cauchy sequence. First, repeatedly applying the contraction inequality gives,
$$
d(x_{n+1},x_{n})\leq \beta^{n}d(x_{1},x_{0})
$$
Let $m,n\in \mathbb {N}$ such that $m > n$,
\begin{aligned}
d(x_{m},x_{n})&\leq d(x_{m},x_{m-1})+d(x_{m-1},x_{m-2})+\cdots +d(x_{n+1},x_{n})\\
&\leq \beta^{m-1}d(x_{1},x_{0}) + \beta^{m-2}d(x_{1},x_{0})+\cdots + \beta^{n}d(x_{1},x_{0})\\
&=\beta^{n}d(x_{1},x_{0})\sum _{k=0}^{m-n-1}\beta^{k}\\
&\leq \beta^{n}d(x_{1},x_{0})\sum _{k=0}^{\infty }\beta^{k}\\
&=\beta^{n}d(x_{1},x_{0})\left({\frac {1}{1-\beta}}\right),\end{aligned}
where the first inequality is the triangle inequality applied repeatedly, the second uses the contraction inequality above, and the third holds since $\beta \ge 0$, so extending the geometric sum only adds non-negative terms.
Let $\varepsilon > 0$ be arbitrary, and assume $d(x_1, x_0) > 0$ (otherwise $x_0$ is already a fixed point and we are done). Since $\beta \in [0, 1)$, we can find a large enough $N \in \mathbb{N}$ so that
$$\beta^{N}<{\frac {\varepsilon (1-\beta)}{d(x_{1},x_{0})}}.$$
Therefore, choosing $m > n > N$, we may write:
$$
d(x_{m},x_{n})\leq \beta^{n}d(x_{1},x_{0})\left({\frac {1}{1-\beta}}\right)<\left({\frac {\varepsilon (1-\beta)}{d(x_{1},x_{0})}}\right)d(x_{1},x_{0})\left({\frac {1}{1-\beta}}\right)=\varepsilon.
$$
This proves that the $\{x_n\}_{n=0}^\infty$ is a Cauchy sequence. By completeness of $(X,d)$, the sequence has a limit $x^{*}\in X.$ Furthermore, $x^{*}$ must be a fixed point of T:
$$ x^{*}=\lim _{n\to \infty }x_{n}=\lim _{n\to \infty }T(x_{n-1})=T\left(\lim _{n\to \infty }x_{n-1}\right)=T(x^{*}).$$
As a contraction mapping, $T$ is continuous, so bringing the limit inside $T$ was justified. Lastly, $T$ cannot have more than one fixed point in $(X,d)$, since any pair of distinct fixed points $p_1$ and $p_2$ would contradict the contraction of $T$:
$$ d(T(p_{1}),T(p_{2}))=d(p_{1},p_{2})>\beta d(p_{1},p_{2}).$$
</details>
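A quick numerical illustration of the iteration in the statement (a toy example of my own, not from the references): the map $T(x) = \tfrac{1}{2}x + 1$ on $\mathbb{R}$ is a contraction with $\beta = \tfrac{1}{2}$, and iterating it from any starting point converges to its unique fixed point $x^* = 2$.

```python
# Fixed-point iteration x_n = T(x_{n-1}) for the contraction T(x) = 0.5*x + 1.
# T has modulus beta = 0.5 and unique fixed point x* = 2 (solve x = 0.5*x + 1).

def T(x):
    return 0.5 * x + 1.0

x = 100.0                  # arbitrary starting point x_0
for _ in range(60):
    x = T(x)               # each step at least halves the distance to x*

print(x)                   # -> 2.0 (up to floating-point precision)
```

The a priori error bound from the proof, $d(x_n, x^*) \le \beta^n d(x_1, x_0)/(1-\beta)$, even tells us in advance how many iterations guarantee a given accuracy.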
## Blackwell's Sufficiency Condition
It is non-trivial to check whether an operator is a contraction mapping. The following theorem helps ease the burden. We need an additional definition for comparison in function spaces.
### Comparison
Let $B(X)$ be a space of functions defined on $X$. For any $f, g \in B(X)$, we say $f \ge g$ if $f(x) \ge g(x)$ for all $x \in X$.
### Statement
Let $X \subset \mathbb{R}^n$ and let $B(X)$ be the space of bounded functions on $X$ equipped with the sup norm $\|\cdot \|$. Suppose $T: B(X) \to B(X)$ is an operator such that
1. $T$ is **monotone**: for any $f, g \in B(X)$, $f \ge g$ implies $T(f) \ge T(g)$.
2. $T$ **discounts constant functions**: for any $f \in B(X)$ and any constant $A \ge 0$ (viewed as a constant function), we have
$$T(f + A) \le Tf + \beta A \text{ for some } 0 \le \beta <1.$$
Then $T$ is a contraction mapping with modulus $\beta$.
<details>
<summary>
Proof:
</summary>
With the sup norm, for all $f, g \in B(X)$, $f \le g + d(f, g)$, where $d(f, g) = \|f - g\|$. Applying the two properties of $T$, we have
$$T(f) \le T (g + d(f, g)) \le T (g) + \beta d(f, g).$$
Exchanging the roles of $f$ and $g$ and using the same logic implies
$$T (g) \le T (f) + \beta d(f, g).$$
Combining these two inequalities gives $|T(f)(x) - T(g)(x)| \le \beta d(f, g)$ for all $x$, i.e., $d(T (f), T (g)) \le \beta d(f, g)$.
Note: this proof can be applied to general functional spaces with sup norm. However, we only consider bounded and continuous functions since we need the space to be complete.
</details>
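As a sanity check (a sketch with an operator of my own choosing, anticipating the Bellman operator below), we can verify the contraction inequality numerically. The discretized operator $(Tf)(x) = \max_{y \le x}[\sqrt{x-y} + \beta f(y)]$ is monotone (a pointwise-larger $f$ only raises the maximand) and discounts constants (adding $A$ to $f$ adds exactly $\beta A$ to $Tf$), so Blackwell's conditions imply $d(Tf, Tg) \le \beta\, d(f, g)$:

```python
import numpy as np

beta = 0.9
grid = np.linspace(0.0, 1.0, 101)       # discretized state space X

def T(f):
    """(Tf)(x) = max over y <= x of sqrt(x - y) + beta * f(y)."""
    Tf = np.empty_like(f)
    for i, x in enumerate(grid):
        y = grid[: i + 1]               # feasible choices y <= x
        Tf[i] = np.max(np.sqrt(x - y) + beta * f[: i + 1])
    return Tf

rng = np.random.default_rng(0)
f = rng.normal(size=grid.size)          # two arbitrary functions on the grid
g = rng.normal(size=grid.size)

d_Tf_Tg = np.max(np.abs(T(f) - T(g)))   # sup-norm distance after applying T
d_f_g = np.max(np.abs(f - g))           # sup-norm distance before
print(d_Tf_Tg <= beta * d_f_g)          # -> True: the contraction inequality
```

Random draws are not a proof, of course, but a violation here would immediately expose a bug in the operator.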
## Model
### Notation
- $X$: the set of possible values for the state variable $x$
- $\Gamma: X \rightrightarrows X$: a correspondence; $\Gamma(x)$ is the set of feasible values for the state variable next period if the current state is $x$.
- $A$: the graph of $\Gamma$, i.e. $$A= \{(x, y) \in X \times X: y \in \Gamma(x)\}.$$
- $u: A \to \mathbb{R}$: the one-period return function.
- $\beta$: the (stationary) discount factor.
### Bellman operator
The Bellman operator maps a candidate value function $f$ to
$$
(Tf)(x) = \max_{a \in \Gamma(x)} \{ u(x, a) + \beta f(a)\}.
$$
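A minimal discretized implementation (a sketch under assumptions of my own, not from the text: the cake-eating return $u(x, a) = \sqrt{x - a}$ with $\Gamma(x) = \{a : 0 \le a \le x\}$, on a uniform grid):

```python
import numpy as np

beta = 0.95
grid = np.linspace(0.0, 1.0, 201)       # discretized state space X

def bellman(f):
    """One application of the Bellman operator:
    (Tf)(x) = max_{a in Gamma(x)} [u(x, a) + beta * f(a)]."""
    Tf = np.empty_like(f)
    for i, x in enumerate(grid):
        a = grid[: i + 1]               # Gamma(x) = {a : 0 <= a <= x}
        Tf[i] = np.max(np.sqrt(x - a) + beta * f[: i + 1])
    return Tf

f0 = np.zeros_like(grid)                # any starting guess f
f1 = bellman(f0)                        # (Tf0)(x) = max_a sqrt(x - a) = sqrt(x)
```

With $f_0 \equiv 0$, one application gives $(Tf_0)(x) = \max_{0 \le a \le x} \sqrt{x - a} = \sqrt{x}$, a handy check that the operator is wired up correctly.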
### Value function
If the function space is complete and the Bellman operator is a contraction mapping, then by the contraction mapping theorem (Banach fixed point theorem) there is a unique $v$ such that $Tv = v$. The value function $v$ is also the limit of iterated applications of the Bellman operator to any initial guess $f$, i.e.,
$$
v = \lim_{k \to \infty} T^k f.
$$
### Policy function
We use $\pi$ to denote the policy function, which selects a maximizer of the Bellman equation, i.e., $\pi(x) \in \operatorname{arg\,max}_{a \in \Gamma(x)} \{u(x, a) + \beta v(a)\}$.
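Putting the pieces together (again under the illustrative cake-eating assumptions $u(x, a) = \sqrt{x - a}$, $\Gamma(x) = \{a : 0 \le a \le x\}$, which are my own choices; for that continuous problem, guess-and-verify with $v(x) = \sqrt{x}/\sqrt{1-\beta^2}$ yields the closed-form policy $\pi(x) = \beta^2 x$, which the grid solution should roughly reproduce):

```python
import numpy as np

# Value function iteration for the illustrative cake-eating problem:
# u(x, a) = sqrt(x - a), Gamma(x) = {a : 0 <= a <= x}.
beta = 0.95
grid = np.linspace(0.0, 1.0, 201)

def bellman(f):
    """Apply T once and record the argmax (the policy on the grid)."""
    Tf = np.empty_like(f)
    pol = np.empty_like(f)
    for i, x in enumerate(grid):
        vals = np.sqrt(x - grid[: i + 1]) + beta * f[: i + 1]
        j = int(np.argmax(vals))
        Tf[i], pol[i] = vals[j], grid[j]
    return Tf, pol

v = np.zeros_like(grid)                 # arbitrary starting guess
for _ in range(5000):                   # iterate v_{k+1} = T v_k
    v_new, policy = bellman(v)
    if np.max(np.abs(v_new - v)) < 1e-10:
        v = v_new
        break
    v = v_new

# v approximates the value function; policy[i] approximates pi(grid[i]).
```

Since the error contracts by the factor $\beta$ each iteration, the stopping rule on $\|Tv - v\|$ translates directly into a bound on the distance to the true fixed point.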
## Existence
Let $C$ be the space of continuous and bounded real-valued functions on $X$, endowed with the sup norm, which makes $C$ a complete metric space. The existence argument applies the maximum theorem.
Suppose
1. $x\in X \subset \mathbb{R}^l$ and $X$ is a convex set.
2. $u \in C$.
3. $\beta \in (0,1)$
4. $\Gamma$ is a non-empty, continuous (u.h.c. and l.h.c.), and compact-valued correspondence.
5. $f \in C$.
Then $Tf \in C$ (by the maximum theorem) and $T$ is a contraction mapping with modulus $\beta$ (by Blackwell's sufficiency conditions).
## Reference
- https://en.wikipedia.org/wiki/Maximum_theorem
- https://en.wikipedia.org/wiki/Banach_fixed-point_theorem
- https://en.wikipedia.org/wiki/Uniform_norm
- Stokey, N. L., & Lucas, R. E., with Prescott, E. C. (1989). Recursive Methods in Economic Dynamics. Harvard University Press.
- Ljungqvist, L., & Sargent, T. J. (2018). Recursive Macroeconomic Theory. MIT Press.