# Contraction Mapping

:::success
**Theorem (Contraction Mapping Principle in [MH]).** Let $T: C_b\left(A, \mathbb{R}^m\right) \rightarrow C_b\left(A, \mathbb{R}^m\right)$ be a given mapping such that there is a constant $\lambda$, $0 \leqslant \lambda<1$, with
\begin{equation*}
\|T(f)-T(g)\| \leqslant \lambda\|f-g\|
\end{equation*}
for all $f, g \in C_b\left(A, \mathbb{R}^m\right)$. Then $T$ (is continuous and) has a unique fixed point; that is, there exists a unique point $f_0 \in C_b\left(A, \mathbb{R}^m\right)$ such that $T\left(f_0\right)=f_0$.
:::

In fact, we can replace $C_b\left(A, \mathbb{R}^m\right)$ by any complete metric space.

:::success
**Theorem (Contraction mapping in [DD] or [R]).** If $T: X \rightarrow X$ is a contraction mapping on a complete metric space $(X, d)$, then there is exactly one solution $x \in X$ of $T(x)=x$.
:::

Recall the theorem on p.113 of [MH]: $C_b\left(A, \mathbb{R}^m\right)$ is a Banach space. Thus, $C_b\left(A, \mathbb{R}^m\right)$ is a complete metric space. Now we take a closer look at the contraction mapping theorem.

:::danger
**Converse.** Completeness is sufficient but not necessary: $X=\{(x,y):y=\sin(1/x),\ x\in(0,1]\}$ is not closed in $\mathbb{R}^2$ (hence not complete), yet every contraction mapping on $X$ has a unique fixed point. Please see [Elekes (2011)](https://arxiv.org/abs/1108.5920) for more detail.
:::

:::danger
**Weak.** Consider $f:[0, \infty) \rightarrow[0, \infty)$ given by
\begin{equation*}
f(x)=\sqrt{1+x^2} .
\end{equation*}
Since
\begin{equation*}
f^{\prime}(\xi)=\frac{\xi}{\sqrt{1+\xi^2}}<1 \text { for each } \xi \in[0, \infty),
\end{equation*}
by the mean value theorem
\begin{equation*}
|f(x)-f(y)|=f^{\prime}(\xi)|x-y|<|x-y| \text { for every } x, y \in[0, \infty) .
\end{equation*}
But $f$ clearly has no fixed points: the strict inequality $|f(x)-f(y)|<|x-y|$ alone, without a uniform constant $\lambda<1$, is not enough.
:::

## Application of Contraction Mapping

### Solve linear equations (Jacobi & Gauss-Seidel methods)

Consider the linear system $Ax=b$ where $A\in\mathbb{R}^{m\times m}$ and $x,b\in\mathbb{R}^m$.
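Iterative schemes like the ones below are exactly Banach iterations $x_{n+1}=T(x_n)$. As a warm-up, here is a minimal numerical sketch (my own toy example, not from the references): $\cos$ maps $[0,1]$ into itself and $|\cos'(x)|=|\sin x|\leqslant \sin 1<1$ there, so it is a contraction on the complete space $[0,1]$ and the iteration converges to its unique fixed point.

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Banach iteration x_{n+1} = T(x_n); converges when T is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos maps [0,1] into itself and |cos'(x)| = |sin x| <= sin(1) < 1 there,
# so cos is a contraction on the complete metric space [0,1].
root = fixed_point(math.cos, 0.5)
print(root)  # approximately 0.739085 (the unique fixed point of cos)
```

The stopping rule $|x_{n+1}-x_n|<\varepsilon$ is justified by the contraction estimate: once successive iterates are close, the iterate is close to the fixed point.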
We write $A=D-L-U$ where $D$ is diagonal, $L$ is strictly lower triangular, and $U$ is strictly upper triangular. Then $b=Ax=(D-L-U)x=Dx-(L+U)x$, so it suffices to solve the fixed-point problem
\begin{equation*}
x=D^{-1}(L+U) x+D^{-1} b :=T(x).
\end{equation*}
On the other hand, an $n \times n$ matrix $A$ is said to be **diagonally dominant** if for each row the sum of the absolute values of the off-diagonal terms is less than the absolute value of the diagonal term; that is,
\begin{equation*}
\left|a_{i i}\right|> \sum_{j \neq i}\left|a_{i j}\right| \quad \text { for all } i\,.
\end{equation*}

:::success
**Theorem.** If $A$ is diagonally dominant, the Jacobi scheme $x_{n+1}=D^{-1}(L+U) x_n+D^{-1} b$ converges to a solution of $A x=b$.
:::

Proof. Define
$$K=\max \left\{\frac{1}{\left|a_{i i}\right|} \sum_{\substack{j=1 \\ j \neq i}}^m\left|a_{i j}\right|: i=1,2, \ldots, m\right\}$$
and by diagonal dominance, we know $K<1$. Focus on the $i$-th component of $T(x)-T(y)$:
\begin{equation*}
\begin{aligned}
\left|(T(x)-T(y))_i\right| &= \left|\frac{1}{a_{i i}}\left(b_i-\sum_j a_{i j} x_j+a_{i i} x_i\right)-\frac{1}{a_{i i}}\left(b_i-\sum_j a_{i j} y_j+a_{i i} y_i\right)\right| \\
& =\frac{1}{\left|a_{i i}\right|}\left|\sum_{\substack{j=1 \\ j \neq i}}^m a_{i j}\left(x_j-y_j\right)\right| \leqslant \frac{1}{\left|a_{i i}\right|} \sum_{\substack{j=1 \\ j \neq i}}^m\left|a_{i j}\right| \|x-y\| \leqslant K \|x-y\| .
\end{aligned}
\end{equation*}
Hence $T$ is a contraction on $\mathbb{R}^m$ (with the max norm), and its unique fixed point solves $Ax=b$.

:::success
**Theorem.** If $A$ is diagonally dominant, the Gauss-Seidel scheme $x_{n+1}=(D-L)^{-1}U x_n+(D-L)^{-1} b$ converges to a solution of $A x=b$.
:::

Proof. Prove it by yourself.

### Volterra integral equation

Consider the Volterra integral equation
\begin{equation}
f(x)=a+\int_0^x k(x, y) f(y) d y
\end{equation}

:::info
**Concrete example (p.117 in [MH]).** Show that the method of successive approximations applied to $f(x)=1+\int_0^x f(y) d y$ leads to the usual formula for $e^x$.
\begin{equation*}
\begin{aligned}
T(0) & =1 ; \\
T(T(0)) & =1+\int_0^x d y=1+x ; \\
T\left(T^2(0)\right) & =1+\int_0^x(1+y) d y=1+x+\frac{x^2}{2} ;
\end{aligned}
\end{equation*}
Hence, this sequence converges to $e^x$.
:::

:::success
**Theorem.** If $\sup _{x \in[0, r]} \int_0^x|k(x, y)| d y=\lambda<1$, then
\begin{equation}
f(x)=a+\int_0^x k(x, y) f(y) d y
\end{equation}
has a unique solution on $[0, r]$.
:::

Proof. See the blackboard.

:::info
**Example (Ex. 5.6.5 in [MH]).** Convert $d y / d x=3 x y$, $y(0)=1$ to an integral equation and set up an iteration scheme to solve it.

Let $y(x)=a+\int^x_0 3sy(s)\,ds$. By $y(0)=1$, we have $a=1$. Take the initial function $y_0=0$. Then
\begin{equation*}
\begin{aligned}
T(0) & =1 ; \\
T(T(0)) & =1+\int^x_0 3s\,ds=1+\frac{3}{2}x^2 ; \\
T\left(T^2(0)\right) & =1+\int_0^x 3s\left(1+\frac{3}{2}s^2\right) d s=1+\frac{3}{2}x^2+\frac{1}{2}\left(\frac{3}{2}x^2\right)^2 ;
\end{aligned}
\end{equation*}
Hence, this sequence converges to $e^{\frac{3}{2}x^2}$.
:::

### More on integral equations

:::success
**Theorem (Ex. 5.26 in [MH]).** Let $k(x, y)$ be a continuous real-valued function on the square $U=\{(x, y) \mid 0 \leqslant x \leqslant 1, 0 \leqslant y \leqslant 1\}$ and assume $|k(x, y)|<1$ for each $(x, y) \in U$. Let $A:[0,1] \rightarrow \mathbb{R}$ be continuous. Prove that there is a unique continuous real-valued function $f(x)$ on $[0,1]$ such that
\begin{equation*}
f(x)=A(x)+\int_0^1 k(x, y) f(y) d y .
\end{equation*}
:::

Proof. Prove it by yourself.

:::success
**Proposition.** Let $f:[0,1] \rightarrow \mathbb{R}$ be a continuous function. The unique solution $v$ of the boundary value problem
\begin{equation*}
\begin{aligned}
& -v^{\prime \prime}=f, \\
& v(0)=0, \quad v(1)=0,
\end{aligned}
\end{equation*}
is given by
\begin{equation*}
v(x)=\int_0^1 g(x, y) f(y) d y
\end{equation*}
where
\begin{equation*}
g(x, y)= \begin{cases}x(1-y) & \text { if } 0 \leq x \leq y \leq 1 \\ y(1-x) & \text { if } 0 \leq y \leq x \leq 1\end{cases}
\end{equation*}
:::

Proof. Skip!
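The successive approximations for the Volterra equation can also be run numerically. As a minimal sketch (the grid and trapezoidal rule are my own choices, not from [MH]), iterate $T(f)(x)=1+\int_0^x f(y)\,dy$ on $[0,1]$ starting from $f_0=0$; the iterates approach $e^x$ up to quadrature error.

```python
import math

def picard_step(f_vals, xs, a=1.0):
    """One successive-approximation step T(f)(x) = a + integral_0^x f(y) dy,
    with the integral computed by the trapezoidal rule on the grid xs."""
    h = xs[1] - xs[0]
    out = [a]
    integral = 0.0
    for i in range(1, len(xs)):
        integral += 0.5 * h * (f_vals[i - 1] + f_vals[i])
        out.append(a + integral)
    return out

n = 200
xs = [i / n for i in range(n + 1)]   # uniform grid on [0, 1]
f = [0.0] * (n + 1)                  # initial guess f_0 = 0
for _ in range(30):                  # iterate f <- T(f)
    f = picard_step(f, xs)

err = max(abs(fv - math.exp(x)) for fv, x in zip(f, xs))
print(err)  # small; only the trapezoidal-rule discretization error remains
```

Note that on $[0,1]$ the contraction constant $\lambda=\sup_x\int_0^x 1\,dy$ equals $1$, yet the iteration still converges, since the $k$-th iterate differs from $e^x$ by a tail of order $x^k/k!$.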
Let's summarize the integral equations we have mentioned.

:::warning
1. Fredholm Integral Equations
\begin{equation*}
u(x)=f(x)+\lambda \int_a^b K(x, t) u(t) d t
\end{equation*}
2. Volterra Integral Equations
\begin{equation*}
u(x)=f(x)+\lambda \int_0^x K(x, t) u(t) d t .
\end{equation*}
:::

:::spoiler Logistic equation (chaos)
\begin{equation*}
x_{n+1}=r x_n\left(1-x_n\right)
\end{equation*}
[Demo](https://www.math.colostate.edu/~shriner/sec-3-3.html), [Wiki](https://en.wikipedia.org/wiki/Logistic_map)
:::

## Existence & Uniqueness of ODE

### Intuition

Focus on the IVP for a first-order system of ODEs
\begin{align}
&x'(t)=f(x(t), t)\\
&x(t_0)=x_0 \tag{$\clubsuit$}
\end{align}

:::info
**Concrete example (Ex. 7.5.3 in [MH]).** Consider the IVP
\begin{align}
&y'(x)=\sqrt{|y|}\\
&y(0)=0
\end{align}
Solving it directly, $\frac{d y}{\sqrt{y}}=d x$ gives $y=\frac{1}{4}(x+c)^2$. By the initial condition, we have
\begin{equation*}
y=\frac{1}{4} x^2
\end{equation*}
for $x\geq 0$. However,
\begin{equation*}
y(x)=\left\{\begin{array}{ll}
0 & \text { for } 0 \leq x \leq \lambda \\
\frac{1}{4}(x-\lambda)^2 & \text { for } x \geq \lambda
\end{array}\right.
\end{equation*}
is also a solution for every $\lambda\geq 0$.
:::

:::warning
Since $\sqrt{|y|}$ is not *good* enough, the solution is not unique. If $f(x, y(x))$ is continuous, then by the **Peano existence theorem**, there exists at least one solution for every initial condition.
:::

Back to $(\clubsuit)$, define an operator
\begin{equation*}
\Phi(x)(t)=x_0+\int_{t_0}^t f(x(s), s) d s\,.
\end{equation*}
If $x(t)$ is a solution to $(\clubsuit)$, then $x(t)$ is a fixed point of $\Phi$; conversely, a fixed point of $\Phi$ is automatically $C^1$ and solves $(\clubsuit)$.

:::warning
The problem of finding a solution to $(\clubsuit)$ transforms into a problem of finding a fixed point of $\Phi$.
:::

:::info
**Concrete example (Ex. 7.5.5 in [MH]).** Find a function $x(t)$ satisfying $x^{\prime}(t)=x(t)$ with initial value $x(0)=1$. Define
$$\Phi(x)(t)=1+\int_0^t x(s) d s, \qquad x_0(t)=1\,.$$
Then
\begin{equation*}
\begin{aligned}
x_1(t)&=1+\int_0^t x_0(s) d s=1+t \\
x_2(t)&=1+\int_0^t x_1(s) d s=1+t+\frac{t^2}{2} \\
x_3(t)&=1+\int_0^t x_2(s) d s=1+t+\frac{t^2}{2}+\frac{t^3}{3 \cdot 2}
\end{aligned}
\end{equation*}
By induction, we have $x_k(t)=1+t+\frac{t^2}{2}+\cdots+\frac{t^k}{k !}$, which converges to
$$x(t)=\sum_{k=0}^{\infty} \frac{t^k}{k !}=e^t\,.$$
:::

### Existence and uniqueness of IVP for ODE

:::success
**Theorem (local existence for ODEs).** Let $I=[t_0-T, t_0+T]$. Let $f:I \times \bar{B}_R\left(x_0\right) \rightarrow \mathbb{R}^n$ be a given continuous mapping. Let $C=\sup \left\{\|f(t, x)\| \mid(t, x) \in I \times \bar{B}_R\left(x_0\right)\right\}$. Suppose there is a constant $K$ such that
\begin{equation*}
\|f(t, x)-f(t, y)\| \leqslant K\|x-y\|
\end{equation*}
for all $t \in I$ and $x, y \in\bar{B}_R\left(x_0\right)$. Let $\delta<\min \{T, R / C, 1 / K\}$. Then there is a unique continuously differentiable map $x:[t_0, t_0+\delta] \rightarrow \mathbb{R}^n$ with $x(t)\in \bar{B}_R(x_0)$ for all $t$, such that ($\clubsuit$) holds.
:::

:::warning
Note that $f$ only needs to be _locally_ Lipschitz continuous. Moreover, $\|x-y\|$ here is the *Euclidean* norm on $\mathbb{R}^n$ instead of the $C^0$ norm.
:::

Proof. See the blackboard.

### Example

:::info
**Example (p.221 in [MH] and p.310 in [DD]).** Consider the IVP $x'(t)=x^2$, $x(0)=1$. Let $T, R$ be undetermined. Now
\begin{equation*}
\begin{aligned}
C & =\sup \{|f(t, x)| \mid-T \leqslant t \leqslant T,-R \leqslant x-1 \leqslant R\} \\
& =\sup \left\{x^2 \mid-R \leqslant x-1 \leqslant R\right\} \\
& =(R+1)^2\,.
\end{aligned}
\end{equation*}
Thus, $R / C=R /(R+1)^2$. Also, $\partial f / \partial x=2 x$, so
\begin{equation*}
\begin{aligned}
K & =\sup \{2|x| \mid-R \leqslant x-1 \leqslant R\} \\
& =2(R+1) .
\end{aligned}
\end{equation*}
Since $f$ does not depend on $t$, we can just choose $T$ large enough so that it does not interfere, say, $T=100$.
Then, by the theorem, we must choose
\begin{equation*}
\delta<\min \left\{\frac{R}{(R+1)^2}, \frac{1}{2(R+1)}\right\} .
\end{equation*}
This will work for any choice of $R$. For example, if we let $R=1$ we get a time of existence $\delta<1 / 4$, _i.e._ $[0, 1/4]$. On the other hand, solving by separation of variables, $\frac{x^{\prime}}{x^2}=1$, we obtain
\begin{equation*}
x(t)=\frac{1}{1-t}
\end{equation*}
on $[0,1)$. Therefore, we can re-consider the new IVP
\begin{equation*}
x^{\prime}=x^2, \quad x(a)=\frac{1}{1-a},
\end{equation*}
for $a=1/4$. With this $a$,
\begin{equation*}
\begin{aligned}
C & =\sup \left\{|f(t, x)| \,\middle|\, -T \leqslant t \leqslant T,\ -R \leqslant x-\frac{1}{1-a} \leqslant R\right\} \\
& =\sup \left\{x^2 \,\middle|\, -R \leqslant x-\frac{1}{1-a} \leqslant R\right\} \\
& =\left(R+\frac{1}{1-a}\right)^2\,.
\end{aligned}
\end{equation*}
To maximize $\delta$, let $R=\frac{1}{1-a}$, which yields $\delta=\frac{1-a}{4}$. Hence, $a_2=a_1+(1-a_1)/4=7/16$, _i.e._ $[0,7/16]$. Continuing this process, we get
\begin{equation*}
a_{n+1}=a_n+\frac{1-a_n}{4}=\frac{3 a_n+1}{4}\,,
\end{equation*}
and $a_{n}\to 1$ as $n\to \infty$; the local theorem thus recovers existence on every $[0,a]$ with $a<1$, consistent with the blow-up of $x(t)=\frac{1}{1-t}$ at $t=1$.
:::

:::info
**Example (Cont'd Ex. 7.5.3 in [MH] and p.312 in [DD]).** Finally, we show $f(x,y)=\sqrt{|y|}$ is not Lipschitz in $y$ near $y=0$: taking $y_1=0$ and $y_2=h$ (the computation is the same for every $x\in\mathbb{R}$),
\begin{equation*}
\lim _{h \rightarrow 0} \frac{|f(0, h)-f(0,0)|}{|h|}=\lim _{h \rightarrow 0} \frac{\sqrt{|h|}}{|h|}=\lim _{h \rightarrow 0} \frac{1}{|h|^{1 / 2}}=+\infty .
\end{equation*}
:::
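The continuation argument above is easy to sanity-check numerically. A minimal sketch (my own check, not from [MH] or [DD]): since $1-a_{n+1}=\frac{3}{4}(1-a_n)$, the recursion $a_{n+1}=(3a_n+1)/4$ starting from $a_0=0$ pushes the guaranteed interval of existence toward $[0,1)$ geometrically.

```python
# Continuation recursion from the example: a_{n+1} = (3 a_n + 1) / 4.
# Since 1 - a_{n+1} = (3/4)(1 - a_n), the gap to 1 shrinks geometrically.
a = 0.0
endpoints = []
for _ in range(60):
    a = (3 * a + 1) / 4
    endpoints.append(a)

print(endpoints[:2])  # [0.25, 0.4375], i.e. the intervals [0, 1/4] and [0, 7/16]
print(endpoints[-1])  # very close to 1, matching the blow-up time of x(t) = 1/(1-t)
```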