---
tags: math, macro
---
<!---
{%hackmd hackmd-dark-theme %}
-->
# Characteristics of Value Functions in Dynamic Programming
## Model
### Notation
- $X$: the set of possible values for the state variable $x$
- $\Gamma: X \to X$: $\Gamma(x)$ is the set of feasible values for the state variable next period if the current state is $x$.
- $A$: the graph of $\Gamma$, i.e. $$A= \{(x, y) \in X \times X: y \in \Gamma(x)\}.$$
- $u: A \to \mathbb{R}$: the one-period return function.
- $\beta$: the (stationary) discount factor.
### Bellman operator
$$
(Tf)(x) = \max_{a \in \Gamma(x)} \{ u(x, a) + \beta f(a)\}.
$$
### Value function
If the function space is complete and the Bellman operator is a contraction mapping, then by the contraction mapping theorem (Banach fixed-point theorem) there is a unique $v$ such that $Tv = v$. The value function $v$ is also the limit of iterating the Bellman operator on any $f$ in the space, i.e.,
$$
v = \lim_{k \to \infty} T^k f.
$$
### Policy function
We use $\pi$ to denote the policy function, i.e., the maximizer of the problem defining the Bellman operator at the fixed point: $$\pi(x) \in \arg\max_{a \in \Gamma(x)} \{ u(x, a) + \beta v(a)\}.$$
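The fixed-point iteration and the policy function can be illustrated numerically. The following sketch uses a cake-eating example of my own choosing (not part of the setup above): $X = [0,1]$, $\Gamma(x) = [0,x]$, $u(x,a) = \sqrt{x-a}$, $\beta = 0.9$, for which one can verify the closed forms $v(x) = \sqrt{x/(1-\beta^2)}$ and $\pi(x) = \beta^2 x$. On a grid, successive Bellman iterates contract in the sup-norm at rate $\beta$.

```python
import numpy as np

# Illustrative cake-eating model (my own choice, not from the note):
# X = [0, 1], Gamma(x) = [0, x], u(x, a) = sqrt(x - a), beta = 0.9.
beta = 0.9
grid = np.linspace(0.0, 1.0, 201)            # grid for both states and actions

# util[i, j] = u(x_i, a_j); infeasible actions a_j > x_i are set to -inf
cons = grid[:, None] - grid[None, :]          # consumption x_i - a_j
util = np.where(cons >= 0, np.sqrt(np.maximum(cons, 0.0)), -np.inf)

def bellman(f):
    """One application of (Tf)(x_i) = max_{a_j in Gamma(x_i)} u(x_i, a_j) + beta f(a_j)."""
    return (util + beta * f[None, :]).max(axis=1)

f = np.zeros_like(grid)                       # initial guess f = 0
sup_diffs = []                                # ||T^{k+1}f - T^k f||_inf per step
for _ in range(200):
    Tf = bellman(f)
    sup_diffs.append(np.max(np.abs(Tf - f)))
    f = Tf

# Approximate policy: the maximizing a_j at each state x_i
policy = grid[(util + beta * f[None, :]).argmax(axis=1)]
```

The sup-norm gaps $\|T^{k+1}f - T^k f\|_\infty$ shrink by at least a factor $\beta$ each step, which is exactly the contraction property behind the Banach fixed-point argument.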
## Continuity
Let $C$ be the space of continuous and bounded real-valued functions endowed with the sup-norm, which ensures $C$ is a complete metric space. The following proof applies the maximum theorem.
### Statement
Suppose
1. $x\in X \subset \mathbb{R}^l$ and $X$ is a convex set.
2. $u \in C$.
3. $\beta \in (0,1)$
4. $\Gamma$ is a non-empty, continuous (u.h.c. and l.h.c.), and compact-valued correspondence.
5. $f \in C$.
Then $Tf \in C$.
<details>
<summary>
Proof:
</summary>
For boundedness: since $u$ and $f$ are both bounded, there is an $M$ such that $|u| \le M$ and $\beta |f| \le M$. Then $|Tf| \le 2M$, so $T$ maps bounded functions into bounded functions.
For continuity, we want to apply the maximum theorem, so we need to show the objective function,
$$
u(x, a) + \beta f(a),
$$
is continuous. Since $u$ and $f$ are both continuous, the sum is also continuous. Because $\Gamma$ is compact-valued and continuous, the maximum theorem implies that the "value function" $Tf$ is also continuous.
</details>
### Maximum theorem
<details>
<summary>
Statement:
</summary>
Let the objective function $h: X \times \Theta \to \mathbb{R}$ be a continuous function, and the set of feasible values $\Gamma: \Theta \rightrightarrows X$ be a compact-valued correspondence such that $\Gamma(\theta) \neq \emptyset$ for all $\theta \in \Theta$. The value function,
$$
v(\theta) = \sup \{h(x, \theta): x \in \Gamma(\theta)\}
$$
is continuous if $\Gamma$ is continuous (i.e., both upper and lower hemicontinuous) at $\theta$.
</details>
### Corollary
With the above assumptions, the value function is also bounded and continuous. This is a direct result of the contraction mapping theorem on the complete metric space $C$.
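A minimal numerical instance of the maximum theorem (a toy example of my own, not from the note): take $h(x, \theta) = -(x-\theta)^2$ and the constant correspondence $\Gamma(\theta) = [0,1]$, which is non-empty, compact-valued, and continuous. The resulting value function $v(\theta) = -\operatorname{dist}(\theta, [0,1])^2$ is continuous even where the maximizer hits the boundary of the feasible set.

```python
import numpy as np

# Toy instance of the maximum theorem (my own example):
# h(x, theta) = -(x - theta)^2, Gamma(theta) = [0, 1] for every theta.
xs = np.linspace(0.0, 1.0, 1001)        # discretized feasible set Gamma(theta)
thetas = np.linspace(-1.0, 2.0, 601)    # parameters, including theta outside [0, 1]

def v(theta):
    """v(theta) = max_{x in Gamma(theta)} h(x, theta), by grid search."""
    return np.max(-(xs - theta) ** 2)

vals = np.array([v(t) for t in thetas])
# Closed form: the maximizer is clip(theta, 0, 1), so v(theta) = -dist(theta, [0,1])^2.
closed_form = -(thetas - np.clip(thetas, 0.0, 1.0)) ** 2
```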
## Increasingness
Let $D$ be the space of (weakly) increasing, continuous, and bounded real-valued functions endowed with the sup-norm, which ensures $D$ is a complete metric space.
### Statement
Suppose
1. $x\in X \subset \mathbb{R}^l$ and $X$ is a convex set.
2. $u \in C$.
3. $\beta \in (0,1)$
4. $\Gamma$ is a non-empty, continuous (u.h.c. and l.h.c.), and compact-valued correspondence.
5. $u(x,a)$ is increasing in $x$.
6. $\Gamma(x)$ is increasing in the following sense: $$\forall x, x' \in X: \quad x \le x' \implies \Gamma(x) \subseteq \Gamma(x').$$
7. $f \in D$.
Then $Tf \in D$.
<details>
<summary>
Proof:
</summary>
Let $x_1, x_2 \in X$ with $x_1 \le x_2$, and let $a_1$ be an optimal solution when the state is $x_1$.
By the increasing assumption on $\Gamma$,
$$
a_1 \in \Gamma(x_1) \subseteq \Gamma(x_2),
$$
so $a_1$ is also feasible at state $x_2$. By the increasing assumption on $u$,
$$
Tf(x_1) = u(x_1, a_1) + \beta f(a_1) \le u(x_2, a_1) + \beta f(a_1) \le Tf(x_2),
$$
so $Tf$ is an increasing function.
</details>
### Corollary
With the above assumptions, the value function is also bounded, continuous, and increasing. This is a direct result of the contraction mapping theorem on the complete metric space $D$.
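The proof can be mirrored numerically in a single Bellman step, using an illustrative cake-eating model of my own choosing (not from the note): $u(x,a) = \sqrt{x-a}$, $\Gamma(x) = [0,x]$, $\beta = 0.9$. Since the feasible sets are nested and $u$ is increasing in $x$, one application of a discretized $T$ turns any increasing $f$ into an increasing $Tf$.

```python
import numpy as np

# One Bellman step in an illustrative cake-eating model (my own choice):
# u(x, a) = sqrt(x - a), Gamma(x) = [0, x], beta = 0.9.
beta = 0.9
grid = np.linspace(0.0, 1.0, 201)
cons = grid[:, None] - grid[None, :]
util = np.where(cons >= 0, np.sqrt(np.maximum(cons, 0.0)), -np.inf)

def bellman(f):
    """(Tf)(x_i) = max_{a_j <= x_i} u(x_i, a_j) + beta * f(a_j)."""
    return (util + beta * f[None, :]).max(axis=1)

# Two increasing test functions f in D; their images under T stay increasing,
# because Gamma(x_1) is contained in Gamma(x_2) and u is increasing in x.
Tf_sqrt = bellman(np.sqrt(grid))
Tf_lin = bellman(2.0 * grid)
```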
## Strict Increasingness
Let $D'$ be the space of strictly increasing, continuous, and bounded real-valued functions endowed with the sup-norm. However, $D'$ is **not complete**: a uniform limit of strictly increasing functions may be only weakly increasing.
To prove that the value function $v$ is strictly increasing, we first prove that $T$ maps an increasing function into a strictly increasing function, and then apply the contraction mapping theorem on $D$.
### Statement
Suppose
1. $x\in X \subset \mathbb{R}^l$ and $X$ is a convex set.
2. $u \in C$.
3. $\beta \in (0,1)$
4. $\Gamma$ is a non-empty, continuous (u.h.c. and l.h.c.), and compact-valued correspondence.
5. $u(x,a)$ is **strictly** increasing in $x$.
6. $\Gamma(x)$ is increasing in the following sense: $$\forall x, x' \in X: \quad x \le x' \implies \Gamma(x) \subseteq \Gamma(x').$$
7. $f \in D$.
Then $Tf \in D'$.
<details>
<summary>
Proof:
</summary>
Let $x_1, x_2 \in X$ with $x_1 < x_2$, and let $a_1$ be an optimal solution when the state is $x_1$.
By the increasing assumption on $\Gamma$,
$$
a_1 \in \Gamma(x_1) \subseteq \Gamma(x_2),
$$
so $a_1$ is also feasible at state $x_2$. By the strictly increasing assumption on $u$,
$$
Tf(x_1) = u(x_1, a_1) + \beta f(a_1) < u(x_2, a_1) + \beta f(a_1) \le Tf(x_2),
$$
so $Tf$ is a strictly increasing function.
</details>
### Corollary
Because $D$ is a complete metric space, by the contraction mapping theorem, there is a unique $v \in D$ such that $Tv =v$. By the above proof, $Tv$ is a strictly increasing function, so the value function $v$ is strictly increasing.
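A one-step numerical check, again in an illustrative cake-eating model of my own choosing ($u(x,a) = \sqrt{x-a}$, $\Gamma(x) = [0,x]$, $\beta = 0.9$; not from the note): take the constant function $f = 0$, which is only weakly increasing. Because $u$ is strictly increasing in $x$, one Bellman step already yields the strictly increasing $Tf(x) = \max_{a \in [0,x]} \sqrt{x-a} = \sqrt{x}$, illustrating that $T$ maps $D$ into $D'$.

```python
import numpy as np

# One Bellman step with a constant f (weakly increasing, f in D).
# Model (my own choice): u(x, a) = sqrt(x - a), Gamma(x) = [0, x], beta = 0.9.
beta = 0.9
grid = np.linspace(0.0, 1.0, 201)
cons = grid[:, None] - grid[None, :]
util = np.where(cons >= 0, np.sqrt(np.maximum(cons, 0.0)), -np.inf)

f = np.zeros_like(grid)                        # constant continuation value
Tf = (util + beta * f[None, :]).max(axis=1)    # reduces to max_a sqrt(x - a)
```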
## Concavity
Let $E$ be the space of (weakly) concave, continuous, and bounded real-valued functions endowed with the sup-norm, which ensures $E$ is a complete metric space.
### Statement
Suppose
1. $x\in X \subset \mathbb{R}^l$ and $X$ is a convex set.
2. $u \in C$.
3. $\beta \in (0,1)$
4. $\Gamma$ is a non-empty, continuous (u.h.c. and l.h.c.), and compact-valued correspondence.
5. $u(x,a)$ is concave in $(x,a)$.
6. $\Gamma$ is convex in the following sense: \begin{align*}&\forall x_1, x_2 \in X,\ \forall \theta \in [0,1],\ \forall a_1 \in \Gamma(x_1),\ \forall a_2 \in \Gamma(x_2):\\ &\theta a_1 + (1-\theta) a_2 \in \Gamma(\theta x_1 + (1-\theta) x_2).\end{align*}
7. $f \in E$.
Then $Tf \in E$.
<details>
<summary>
Proof:
</summary>
Let $x_1, x_2 \in X$ and let $a_1, a_2$ be the corresponding optimal solutions when the state is $x_1, x_2$, respectively. Given $\theta \in [0,1]$, define $x_3, a_3$ as the convex combinations, i.e.
\begin{align*}
x_3 &= \theta x_1 + (1-\theta) x_2, \\
a_3 &= \theta a_1 + (1-\theta) a_2.
\end{align*}
We want to show that
$$
Tf(x_3) \ge \theta Tf(x_1) + (1-\theta) Tf(x_2).
$$
By definition,
\begin{align*}
Tf(x_1) &= u(x_1, a_1) + \beta f(a_1), \\
Tf(x_2) &= u(x_2, a_2) + \beta f(a_2).
\end{align*}
By the convexity assumption on $\Gamma$,
$$ a_3 \in \Gamma(x_3), $$
so $a_3$ is feasible at $x_3$. By the concavity assumption on $u$,
$$
u(x_3, a_3) \ge \theta u(x_1, a_1) + (1-\theta) u(x_2, a_2).
$$
By the concavity assumption on $f$,
$$
\beta f(a_3) \ge \theta \beta f(a_1) + (1-\theta) \beta f(a_2).
$$
Combining the above results,
$$
Tf(x_3) \ge u(x_3, a_3) + \beta f(a_3) \ge \theta Tf(x_1) + (1-\theta) Tf(x_2),
$$
so $Tf$ is a concave function.
</details>
### Corollary
With the above assumptions, the value function is also bounded, continuous, and concave. This is a direct result of the contraction mapping theorem on the complete metric space $E$.
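One Bellman step can be checked against a hand-solved maximization in an illustrative cake-eating model of my own choosing (not from the note): $u(x,a) = \sqrt{x-a}$, $\Gamma(x) = [0,x]$, $\beta = 0.9$. For the concave $f(a) = \sqrt{a}$, the first-order condition $\frac{1}{2\sqrt{x-a}} = \frac{\beta}{2\sqrt{a}}$ gives the maximizer $a = \beta^2 x/(1+\beta^2)$ and hence $Tf(x) = \sqrt{(1+\beta^2)\,x}$, which is concave, as the statement predicts.

```python
import numpy as np

# One Bellman step on the concave f(a) = sqrt(a) in an illustrative cake-eating
# model (u(x, a) = sqrt(x - a), Gamma(x) = [0, x], beta = 0.9; my own choices).
beta = 0.9
xgrid = np.linspace(0.05, 1.0, 96)

def Tf_at(x):
    """(Tf)(x) = max_{a in [0, x]} sqrt(x - a) + beta * sqrt(a), by fine grid search."""
    a = np.linspace(0.0, x, 4001)
    return np.max(np.sqrt(x - a) + beta * np.sqrt(a))

Tf = np.array([Tf_at(x) for x in xgrid])
```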
## Strict Concavity
Let $E'$ be the space of strictly concave, continuous, and bounded real-valued functions endowed with the sup-norm. However, $E'$ is **not complete**: a uniform limit of strictly concave functions may be only weakly concave.
To prove that the value function $v$ is strictly concave, we first prove that $T$ maps a concave function into a strictly concave function, and then apply the contraction mapping theorem on $E$.
### Statement
Suppose
1. $x\in X \subset \mathbb{R}^l$ and $X$ is a convex set.
2. $u \in C$.
3. $\beta \in (0,1)$
4. $\Gamma$ is a non-empty, continuous (u.h.c. and l.h.c.), and compact-valued correspondence.
5. $u(x,a)$ is **strictly** concave in $(x,a)$.
6. $\Gamma$ is convex in the following sense: \begin{align*}&\forall x_1, x_2 \in X,\ \forall \theta \in [0,1],\ \forall a_1 \in \Gamma(x_1),\ \forall a_2 \in \Gamma(x_2):\\ &\theta a_1 + (1-\theta) a_2 \in \Gamma(\theta x_1 + (1-\theta) x_2).\end{align*}
7. $f \in E$.
Then $Tf \in E'$.
<details>
<summary>
Proof:
</summary>
Let $x_1, x_2 \in X$ with $x_1 \neq x_2$, and let $a_1, a_2$ be the corresponding optimal solutions when the state is $x_1, x_2$, respectively. Given $\theta \in (0,1)$, define $x_3, a_3$ as the convex combinations, i.e.
\begin{align*}
x_3 &= \theta x_1 + (1-\theta) x_2 \\
a_3 &= \theta a_1 + (1-\theta) a_2
\end{align*}
We want to show that
$$
Tf(x_3) > \theta Tf(x_1) + (1-\theta) Tf(x_2).
$$
By definition,
\begin{align*}
Tf(x_1) &= u(x_1, a_1) + \beta f(a_1), \\
Tf(x_2) &= u(x_2, a_2) + \beta f(a_2).
\end{align*}
By the convexity assumption on $\Gamma$,
$$ a_3 \in \Gamma(x_3), $$
so $a_3$ is feasible at $x_3$. Since $x_1 \neq x_2$, we have $(x_1, a_1) \neq (x_2, a_2)$, so by the strict concavity assumption on $u$,
$$
u(x_3, a_3) > \theta u(x_1, a_1) + (1-\theta) u(x_2, a_2).
$$
By the (weak) concavity assumption on $f$,
$$
\beta f(a_3) \ge \theta \beta f(a_1) + (1-\theta) \beta f(a_2).
$$
Combining the above results,
$$
Tf(x_3) \ge u(x_3, a_3) + \beta f(a_3) > \theta Tf(x_1) + (1-\theta) Tf(x_2),
$$
so $Tf$ is a strictly concave function.
</details>
### Corollary
Because $E$ is a complete metric space, by the contraction mapping theorem, there is a unique $v \in E$ such that $Tv =v$. By the above proof, $Tv$ is a strictly concave function, so the value function $v$ is strictly concave.
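As with strict monotonicity, one step suffices numerically. Here I use an illustrative payoff of my own choosing, $u(x,a) = \sqrt{x-a} - 0.1(x^2 + a^2)$ on $\Gamma(x) = [0,x]$, which is strictly concave in $(x,a)$, and the weakly concave constant $f = 0$. Since this $u$ is decreasing in $a$, the maximizer is $a = 0$ and $Tf(x) = \sqrt{x} - 0.1x^2$, which is strictly concave.

```python
import numpy as np

# One Bellman step with f = 0 (weakly concave) and a strictly concave payoff
# u(x, a) = sqrt(x - a) - 0.1 * (x^2 + a^2) on Gamma(x) = [0, x] (my own example).
grid = np.linspace(0.0, 1.0, 201)

def Tf_at(x):
    """(Tf)(x) = max_{a in [0, x]} u(x, a); the beta * f(a) term vanishes as f = 0."""
    a = np.linspace(0.0, x, 2001)
    return np.max(np.sqrt(x - a) - 0.1 * (x**2 + a**2))

Tf = np.array([Tf_at(x) for x in grid])
```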
## Differentiability
We will prove a lemma first, then apply it to the value function in dynamic programming.
### Lemma
Let $X \subset \mathbb{R}^l$ be a convex set, let $v: X \to \mathbb{R}$ be concave, let $x_0 \in int X$, and let $D$ be a neighborhood of $x_0$. If there is a concave, differentiable function $w: D \to \mathbb{R}$, with $w(x_0) = v(x_0)$ and with $w(x) \le v(x)$ for all $x \in D$, then $v$ is differentiable at $x_0$, and
$$
Dv(x_0) = Dw(x_0).
$$
<details>
<summary>
Proof:
</summary>
By the concavity of $v$, any subgradient $p$ of $v$ at $x_0$ must satisfy
$$
p \cdot (x - x_0) \ge v(x) - v(x_0).
$$
Since $w(x_0) = v(x_0)$ and $w(x) \le v(x)$ for all $x \in D$,
$$
p \cdot (x - x_0) \ge v(x) - v(x_0) \ge w(x) - w(x_0).
$$
The displayed inequality says that $p$ is also a subgradient of $w$ at $x_0$. Since $w$ is differentiable at $x_0$, its only subgradient there is $Dw(x_0)$, so $p = Dw(x_0)$ is unique; and a concave function with a unique subgradient at an interior point $x_0$ is differentiable at $x_0$, with derivative equal to that subgradient.
</details>
### Statement
Suppose
1. $x\in X \subset \mathbb{R}^l$ and $X$ is a convex set.
2. $u \in C$.
3. $\beta \in (0,1)$
4. $\Gamma$ is a non-empty, continuous (u.h.c. and l.h.c.), and compact-valued correspondence.
5. $u(x,a)$ is concave in $(x,a)$.
6. $\Gamma$ is convex in the following sense: \begin{align*}&\forall x_1, x_2 \in X,\ \forall \theta \in [0,1],\ \forall a_1 \in \Gamma(x_1),\ \forall a_2 \in \Gamma(x_2):\\ &\theta a_1 + (1-\theta) a_2 \in \Gamma(\theta x_1 + (1-\theta) x_2).\end{align*}
7. $u$ is continuously differentiable on the interior of $A$.
8. $x_0 \in int X$ and $\pi(x_0) \in int \Gamma(x_0)$.
Then the value function $v$ is differentiable at $x_0$, and
$$
Dv(x_0) = D_x u(x_0, \pi(x_0)),
$$
where $D_x$ denotes the derivative with respect to the state argument.
<details>
<summary>
Proof:
</summary>
Since $\pi(x_0) \in int \Gamma(x_0)$ and $\Gamma$ is continuous, there exists a neighborhood $D$ of $x_0$ such that $\pi(x_0) \in int \Gamma(x)$ for all $x \in D$. Define $w: D \to \mathbb{R}$ as
$$
w(x) = u(x, \pi(x_0)) + \beta v(\pi(x_0)),
$$
where $v(\pi(x_0))$ is a constant, so $w$ is concave and differentiable by the concavity and continuous differentiability of $u$. Moreover, $w(x_0) = v(x_0)$ by the optimality of $\pi(x_0)$ at $x_0$, and $w(x) \le v(x)$ for all $x \in D$ because $\pi(x_0)$ is feasible, but not necessarily optimal, at each $x \in D$. Hence we can directly apply the above lemma: $v$ is differentiable at $x_0$ and
$$
Dv(x_0) = Dw(x_0) = D_x u(x_0, \pi(x_0)).
$$
</details>
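The envelope formula can be verified numerically in an illustrative cake-eating model of my own choosing (not from the note): $u(x,a) = \sqrt{x-a}$, $\Gamma(x) = [0,x]$, $\beta = 0.9$. For this model one can verify the closed forms $v(x) = \sqrt{x/(1-\beta^2)}$ and $\pi(x) = \beta^2 x$, so $D_x u(x, \pi(x)) = 1/(2\sqrt{x - \beta^2 x})$ should match a finite-difference derivative of $v$.

```python
import numpy as np

# Envelope check in an illustrative cake-eating model (my own example):
# v(x) = sqrt(x / (1 - beta^2)), pi(x) = beta^2 * x.
beta = 0.9
grid = np.linspace(0.2, 1.0, 161)       # stay away from x = 0, where v' blows up

v = np.sqrt(grid / (1 - beta**2))       # closed-form value function
policy = beta**2 * grid                 # closed-form policy pi(x)

dv = np.gradient(v, grid)               # finite-difference approximation of Dv(x)
envelope = 1.0 / (2.0 * np.sqrt(grid - policy))   # D_x u at (x, pi(x))
```

Both sides equal $1/(2\sqrt{x(1-\beta^2)})$, so the only discrepancy is the finite-difference error.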
## References
- https://www.sas.upenn.edu/~fuentesa/Teaching/ECON%20704/DP-proofs.pdf
- https://en.wikipedia.org/wiki/Maximum_theorem
- https://en.wikipedia.org/wiki/Banach_fixed-point_theorem
- https://en.wikipedia.org/wiki/Uniform_norm
- Stokey, N. L., & Lucas, R. E., Jr., with Prescott, E. C. (1989). *Recursive Methods in Economic Dynamics*. Harvard University Press.