# Lecture 8: Intro to ODE
$\newcommand{\R}{\mathbb{R}}$
$\renewcommand{\C}{\mathbb{C}}$
###### tags: `224a`
## First order Linear ODE
We begin by proving a basic existence and uniqueness result for first order initial value problems, which will serve as the foundation of such results for second and higher order problems. The notation $C^k[a,b]$ will denote $k$ times continuously differentiable functions on $[a,b]$ (this means they must be defined in a neighborhood of the endpoints), whose co-domain will be apparent from the context.
**Theorem.** For a finite closed interval $[a,b]$, the IVP:
$$ u'(x) = A(x)u(x) + b(x),$$
$$ u(x_0)=u_0\in\R^d,$$
where $u\in C^1[a,b]$ is $\R^d$-valued and $A, b\in C[a,b]$ are real matrix ($d\times d$) and vector ($\R^d$) valued respectively, has a unique solution $u$.
*Proof.* Observe that by the fundamental theorem of calculus, a $C^1$ function $u$ satisfies the equation iff it satisfies the integral equation
$$ u(x) = u_0 + \int_{x_0}^x (A(s)u(s)+b(s))ds.$$
Define the operator $T:C[a,b]\rightarrow C[a,b]$ by
$$ (T\phi)(x) = u_0 + \int_{x_0}^x (A(s)\phi(s)+b(s))ds.$$
The fact that $T\phi$ is a continuous function follows from continuity of the integrand on $[a,b]$. Consider $C[a,b]$ to be a metric space with the sup norm $|f-g|=\sup_{x\in[a,b]}|f(x)-g(x)|$, and recall that $C[a,b]$ is complete in this norm.
Our goal is to show that $T$ has a unique fixed point, which will be a solution to our ODE. We will appeal to the Banach fixed point theorem: if $(M,d)$ is a complete metric space and $T:M\rightarrow M$ satisfies $d(T(a),T(b))\le \alpha d(a,b)$ for all $a,b$ for some $\alpha<1$, then $T$ has a unique fixed point.
$T$ itself is not a contraction, but it turns out a high enough power of it is. This method is called *Picard iteration*.
**Claim.** There exists $n$ such that $|T^nf-T^ng|\le .9|f-g|$ for every $f,g\in C[a,b]$.
*Proof.* To be completed.
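To make the iteration concrete, here is a minimal numerical sketch (my own illustration, not part of the lecture) applying $T$ to the scalar IVP $u'=u$, $u(0)=1$ on $[0,1]$, whose solution is $e^x$; the grid, the trapezoid-rule quadrature, and the initial guess are all illustrative choices.

```python
import numpy as np

# Minimal sketch of Picard iteration (grid and quadrature are illustrative):
# apply (T phi)(x) = u0 + \int_{x0}^x (A(s) phi(s) + b(s)) ds
# to the scalar IVP u' = u, u(0) = 1 on [0, 1], whose solution is e^x.

xs = np.linspace(0.0, 1.0, 1001)   # grid on [a, b] = [0, 1]
dx = xs[1] - xs[0]
A = np.ones_like(xs)               # A(x) = 1
b = np.zeros_like(xs)              # b(x) = 0
u0 = 1.0

def T(phi):
    """One Picard step: integrate A*phi + b from x0 = 0 by the trapezoid rule."""
    integrand = A * phi + b
    running_integral = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0) * dx))
    return u0 + running_integral

phi = np.zeros_like(xs)            # initial guess phi_0 = 0
exact = np.exp(xs)
for n in range(1, 11):
    phi = T(phi)
    # sup-norm distance to the true solution
    print(n, np.max(np.abs(phi - exact)))
```

The printed sup-norm errors decay roughly factorially in $n$, consistent with the claim that a sufficiently high power of $T$ is a contraction.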
## Higher order ODE
A general $n$th order ODE:
$$ \sum_{j=0}^n a_j(x) u^{(j)}(x) = b(x)$$
with $a_n(x)\neq 0$ can be written as a system of first order ODE in the $n$ functions $u_0=u,u_1,\ldots,u_{n-1}$ constrained by the $n-1$ equations: $$u_1=u_0',\quad u_2=u_1',\quad \ldots,\quad u_{n-1}=u_{n-2}',$$ so that $u_k=u^{(k)}$ for each $k$. This allows one to write the ODE linearly as
$$ a_n(x)u_{n-1}'(x)+\sum_{j=0}^{n-1} a_j(x) u_j(x) = b(x).$$
$\newcommand{\uh}{\hat{u}}$
Viewing the variables as a single vector valued function $\uh:[a,b]\rightarrow \R^n$ and dividing by $a_n(x)$, the above system is of the form
$$\uh'(x) = A(x) \uh(x) + \frac{b(x)}{a_n(x)}e_n,$$
where $e_n$ is the $n$th standard basis vector,
and there is a bijection between solutions of this equation and solutions of the $n$th order ODE (the matrix $A(x)$ is just the [companion matrix](https://en.wikipedia.org/wiki/Companion_matrix)). Thus, by the theorem in the previous section it must also have a unique solution given initial data
$$u(x_0)=u_0,\ldots, u^{(n-1)}(x_0)=u_{n-1}.$$
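As a concrete sketch of this reduction (my own illustration; the coefficient functions, the use of `scipy.integrate.solve_ivp`, and the test problem $u''+u=0$ are assumptions, not from the lecture), the right-hand side of the companion system can be assembled directly from the coefficients $a_0,\ldots,a_n$ and $b$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch of the reduction to a first order system: the n-th order ODE
#   sum_{j=0}^n a_j(x) u^{(j)}(x) = b(x)
# becomes  uhat' = A(x) uhat + (b(x)/a_n(x)) e_n  with A(x) the companion matrix.

def companion_rhs(a, b):
    """Return f(x, uhat) for the companion system; `a` is a list of coefficient
    functions [a_0, ..., a_n] and `b` is the inhomogeneity."""
    n = len(a) - 1
    def f(x, u):
        du = np.empty(n)
        du[:-1] = u[1:]                # u_k' = u_{k+1} for k = 0, ..., n-2
        du[-1] = (b(x) - sum(a[j](x) * u[j] for j in range(n))) / a[n](x)
        return du
    return f

# Example: u'' + u = 0 with u(0) = 1, u'(0) = 0, whose solution is cos(x).
a = [lambda x: 1.0, lambda x: 0.0, lambda x: 1.0]   # a_0 = 1, a_1 = 0, a_2 = 1
b = lambda x: 0.0
xs = np.linspace(0.0, 2 * np.pi, 200)
sol = solve_ivp(companion_rhs(a, b), (0.0, 2 * np.pi), [1.0, 0.0],
                rtol=1e-9, atol=1e-9, dense_output=True)
print(np.max(np.abs(sol.sol(xs)[0] - np.cos(xs))))   # close to 0
```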
## Dimension of The Solution Space
Let $L=\sum_{j=0}^n a_j(x)\, d^j/dx^j$ be an $n$th order differential operator with continuous coefficients and $a_n(x)\neq 0$ on $[a,b]$. For the *homogeneous* problem $Lu=0$, the set of solutions is a subspace of $C^n[a,b]$. For any point $x_0\in [a,b]$, consider the linear map
$$ E_{x_0}(u) = [u(x_0),u'(x_0),\ldots,u^{(n-1)}(x_0)]^T$$
into $\R^n$. Restricted to the solution space, this map is surjective by existence (every vector of initial data at $x_0$ is attained by some solution of $Lu=0$) and injective by uniqueness (two solutions with the same initial data coincide), so it is a linear bijection. Thus the space of solutions has dimension exactly $n$ for an $n$th order ODE.
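A small numerical sketch of this correspondence (again my own illustration; the test equation $u''+u=0$ and the use of `scipy.integrate.solve_ivp` are assumptions): solving with the standard basis vectors as initial data at $x_0=0$ produces a basis of the solution space, and any other solution is recovered as the corresponding linear combination.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative check that E_{x0} identifies the solution space with R^n:
# for u'' + u = 0 (n = 2), initial data e_1 and e_2 at x0 = 0 give cos and sin,
# and the solution with initial data [c1, c2] equals c1*cos + c2*sin.

rhs = lambda x, u: [u[1], -u[0]]     # first order system for u'' + u = 0
xs = np.linspace(0.0, 5.0, 100)

def solve(init):
    return solve_ivp(rhs, (0.0, 5.0), init, rtol=1e-10, atol=1e-10,
                     dense_output=True).sol(xs)[0]

basis = [solve([1.0, 0.0]), solve([0.0, 1.0])]    # ~ cos(x), sin(x)
c = [2.0, -3.0]                                   # arbitrary initial data u(0), u'(0)
print(np.max(np.abs(solve(c) - (c[0] * basis[0] + c[1] * basis[1]))))   # ~ 0
```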