# Lecture 8: Fredholm Alternative, Second Order BVP

###### tags: `224a 2020`

## The Fredholm Alternative

$\newcommand{\tr}{\mathrm{tr}}$ $\newcommand{\ran}{\mathrm{ran}}$

In finite-dimensional linear algebra, the solvability of the equation $Ax=b$ is nicely characterized by the fact that
$$ \ran(A) = \ker(A^*)^\perp,$$
but this is no longer true in infinite dimensions, since the range may not be closed, as we saw with the example of the Volterra operator. The Fredholm alternative states that the finite-dimensional relation can be recovered for operators of the form $\lambda I - T$, where $T$ is compact and $\lambda\neq 0$. We will denote $\lambda I$ by $\lambda$, as is customary.

**Theorem 5. (Fredholm Alternative)** If $T\in K(H)$ and $\lambda\in \mathbb{C}-\{0\}$ then for $A=\lambda - T$:
1. $\ran(A)=\overline{\ran(A)} = \ker(A^*)^\perp.$
2. $\dim(\ker(A))=\dim(\ker(A^*))<\infty.$

This theorem has the following important conceptual consequences:
1. $\lambda$ is an eigenvalue of $T$ iff $\overline{\lambda}$ is an eigenvalue of $T^*$ (since these correspond to $\ker(A)\neq \{0\}$ and $\ker(A^*)\neq\{0\}$, respectively).
2. $(\lambda-T)x=b$ has a solution for every $b$ iff $\lambda$ is not an eigenvalue of $T$. Otherwise, the set of $b$ for which a solution exists is precisely $\ker(A^*)^\perp$.

*Proof of (1).* We showed that for every $\lambda\neq 0$ there is a constant $c>0$ such that
$$ \|(\lambda-T)x\|\ge c\|x\|$$
for all $x\in \ker(A)^\perp$ (see the class notes for details). This implies that any sequence $x_n\in\ker(A)^\perp$ is Cauchy whenever $(\lambda-T)x_n$ is Cauchy, yielding closedness of the range. $\square$

*Proof of (2).* Given $\lambda\neq 0$, decompose $T=T_0+T_1$, where $\|T_0\|<|\lambda|$ and $T_1$ has finite rank (this is possible because $T$ is a norm limit of finite rank operators).
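The solvability dichotomy in consequence 2 can be seen concretely in finite dimensions, where every operator is compact. The following sketch (the matrices are made up for illustration) takes a diagonal $T$ with eigenvalue $\lambda=1$ and checks that $(\lambda-T)x=b$ is solvable exactly when $b\perp\ker(A^*)$:

```python
import numpy as np

# Illustrative finite-dimensional example: T is diagonal with eigenvalue
# lam = 1, so A = lam*I - T is singular and (lam - T)x = b is solvable
# exactly when b is orthogonal to ker(A^*).
T = np.diag([1.0, 0.5, 0.0, 0.0])
lam = 1.0
A = lam * np.eye(4) - T        # A = diag(0, 0.5, 1, 1), ker(A^*) = span{e_1}

def solvable(A, b, tol=1e-10):
    """Check whether Ax = b has a solution via the least-squares residual."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return bool(np.linalg.norm(A @ x - b) < tol)

b_good = np.array([0.0, 1.0, 2.0, 3.0])   # orthogonal to e_1: solvable
b_bad = np.array([1.0, 1.0, 2.0, 3.0])    # not orthogonal to e_1: unsolvable
print(solvable(A, b_good), solvable(A, b_bad))  # True False
```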
Observe that the power series
$$(1-x)^{-1}=\sum_{k\ge 0}x^k=:S(x)$$
converges absolutely (at a geometric rate) on $\{|x|<1\}$ and satisfies $(1-x)S(x)=1$ there. Consequently,
$$S(T_0/\lambda)=\sum_{k\ge 0} (T_0/\lambda)^k$$
converges in operator norm: by the triangle inequality, the tails are dominated by the tails of the scalar geometric series $\sum_k \|T_0/\lambda\|^k$, and $\|T_0/\lambda\|<1$. Thus $(1/\lambda)S(T_0/\lambda)=(\lambda-T_0)^{-1}\in L(H)$. We now have
$$ (\lambda-T_0)^{-1} (\lambda-T) = I - (\lambda-T_0)^{-1}T_1=:I-T_2,$$
where $T_2$ has finite rank. As left or right multiplication by an invertible transformation does not change the dimension of the kernel,
$$\dim(\ker(\lambda-T))=\dim(\ker(I-T_2))$$
and
$$\dim(\ker(\overline{\lambda}-T^*))=\dim(\ker(I-T_2^*)).$$
The latter two dimensions are equal by elementary linear algebra (homework) since $T_2$ has finite rank. $\square$

$\newcommand{\R}{\mathbb{R}}$ $\renewcommand{\C}{\mathbb{C}}$

## First order Linear ODE

We begin by proving a basic existence and uniqueness result for first order initial value problems, which will serve as the foundation of analogous results for second and higher order problems. The notation $C^k[a,b]$ denotes the $k$ times continuously differentiable functions on $[a,b]$ (so they must be defined in a neighborhood of the endpoints), whose codomain will be apparent from the context.

**Theorem.** For a finite closed interval $[a,b]$ and $x_0\in[a,b]$, the IVP
$$ u'(x) = A(x)u(x) + b(x),$$
$$ u(x_0)=u_0\in\R^d,$$
where $A\in C[a,b]$ is matrix-valued and $b\in C[a,b]$ is vector-valued (both real), has a unique solution $u\in C^1[a,b]$.

*Proof.* Observe that by the fundamental theorem of calculus, a $C^1$ function $u$ satisfies the equation iff it satisfies the integral equation
$$ u(x) = u_0 + \int_{x_0}^x (A(s)u(s)+b(s))\,ds.$$
Define the operator $T:C[a,b]\rightarrow C[a,b]$ by
$$ (T\phi)(x) = u_0 + \int_{x_0}^x (A(s)\phi(s)+b(s))\,ds.$$
That $T\phi$ is a continuous function follows from continuity of the integrand on $[a,b]$.
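The Neumann series argument above is easy to check numerically. This sketch (matrix and sizes chosen arbitrarily) scales a random $T_0$ so that $\|T_0\|<|\lambda|$ and compares the partial sums of $(1/\lambda)\sum_k (T_0/\lambda)^k$ against the exact inverse of $\lambda I - T_0$:

```python
import numpy as np

# Numerical sketch: when ||T0|| < |lam|, the Neumann series
# (1/lam) * sum_k (T0/lam)^k converges to (lam*I - T0)^{-1}.
rng = np.random.default_rng(1)
T0 = rng.standard_normal((5, 5))
T0 *= 0.4 / np.linalg.norm(T0, 2)   # scale so the operator norm is 0.4
lam = 1.0                            # ||T0|| < |lam|: series converges

partial = np.zeros((5, 5))
term = np.eye(5)
for _ in range(200):                 # sum the first 200 terms
    partial += term
    term = term @ (T0 / lam)
approx_inverse = partial / lam

exact_inverse = np.linalg.inv(lam * np.eye(5) - T0)
print(np.allclose(approx_inverse, exact_inverse))  # True
```

Since $\|T_0/\lambda\|=0.4$, the tail after 200 terms is of size roughly $0.4^{200}$, far below floating-point precision.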
Consider $C[a,b]$ as a metric space with the sup metric $d(f,g)=\sup_{x\in[a,b]}|f(x)-g(x)|$; this space is complete. Our goal is to show that $T$ has a unique fixed point, which will be a solution of our ODE. We will appeal to the Banach fixed point theorem: if $(M,d)$ is a *complete* metric space and $T:M\rightarrow M$ satisfies $d(T(a),T(b))\le \alpha\, d(a,b)$ for all $a,b\in M$ and some $\alpha<1$, then $T$ has a unique fixed point. $T$ itself is not a contraction, but it turns out that a high enough power of it is, and this suffices: $T$ maps the unique fixed point of $T^n$ to another fixed point of $T^n$, so that point is fixed by $T$ as well. This method is called *Picard iteration*.

**Claim.** There exists $n$ such that $d(T^nf,T^ng)\le 0.9\, d(f,g)$ for every $f,g\in C[a,b]$.

*Proof.* See the class notes, as well as the beginning of Lecture 10, which fills in a gap.

## Dimension of The Solution Space

Let $L=\sum_{j=0}^n a_j\, d^j/dx^j$ with $a_n\neq 0$ be an $n$th order differential operator. For the *homogeneous* problem $Lu=0$, the set of solutions is a subspace of $C^n[a,b]$. For any point $x_0\in [a,b]$, consider the linear map
$$ E_{x_0}(u) = [u(x_0),u'(x_0),\ldots,u^{(n-1)}(x_0)]^T$$
into $\R^n$. By existence and uniqueness of solutions to the IVP at $x_0$, this map is a bijection between the solution space and $\R^n$ (injectivity follows from uniqueness, surjectivity from existence). Thus the space of solutions of an $n$th order ODE has dimension exactly $n$.

## Extra: Higher order ODE

A general $n$th order ODE
$$ \sum_{j=0}^n a_j(x) u^{(j)}(x) = b(x)$$
with $a_n(x)\neq 0$ can be written as a system of first order ODE in $n$ functions constrained by the $n-1$ equations
$$u_0=u,\quad u_1=u_0', \quad\ldots,\quad u_k=u_{k-1}'=u^{(k)},\quad\ldots,\quad u_{n-1}=u_{n-2}'.$$
This allows one to write the ODE linearly as
$$ a_n(x)u_{n-1}'(x)+\sum_{j=0}^{n-1} a_j(x) u_j(x) = b(x).$$
$\newcommand{\uh}{\hat{u}}$
Viewing the variables as a single vector valued function $\uh:[a,b]\rightarrow \R^n$ and dividing by $a_n(x)$, the above system is of the form
$$\uh'(x) = A(x) \uh(x) + \frac{b(x)}{a_n(x)}e_n,$$
and there is a bijection between solutions of this equation and solutions of the $n$th order ODE (the matrix $A(x)$ is just the [companion matrix](https://en.wikipedia.org/wiki/Companion_matrix)).
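As a minimal sketch of this reduction (constant coefficients, with an invented helper name), the following builds the companion matrix for $u''-3u'+2u=0$; its eigenvalues $1$ and $2$ recover the exponents of the solution basis $e^x, e^{2x}$:

```python
import numpy as np

# Build the companion matrix A for the n-th order ODE sum_j a_j u^(j) = 0
# (constant coefficients, a_n != 0) in the variables uh = (u, u', ..., u^(n-1)),
# so that uh' = A uh.
def companion(a):
    """a = [a_0, ..., a_n] with a[-1] != 0. (Name is illustrative.)"""
    n = len(a) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)                   # rows u_k' = u_{k+1}
    A[-1, :] = [-a[j] / a[n] for j in range(n)]  # last row from the ODE
    return A

# Example: u'' - 3u' + 2u = 0, i.e. a = [2, -3, 1]. Solutions are spanned
# by e^x and e^{2x}, so A should have eigenvalues 1 and 2.
A = companion([2.0, -3.0, 1.0])
print(sorted(np.linalg.eigvals(A).real))  # [1.0, 2.0]
```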
Thus, by the theorem in the previous section it must also have a unique solution given initial data $$u(x_0)=u_0,\ldots, u^{(n-1)}(x_0)=u_{n-1}.$$
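The Picard iteration from the existence proof can also be demonstrated numerically. A hedged sketch for the scalar IVP $u'=u$, $u(0)=1$ on $[0,1]$ (exact solution $e^x$), with the integral operator discretized by the trapezoid rule on a grid:

```python
import numpy as np

# Picard iteration for u' = u, u(0) = 1 on [0, 1]: iterate the map
# (T phi)(x) = u0 + int_0^x phi(s) ds, discretized with a cumulative
# trapezoid rule. The iterates converge to the solution e^x.
x = np.linspace(0.0, 1.0, 1001)

def T(phi, u0=1.0):
    """Discretized Picard map: u0 + cumulative trapezoid integral of phi."""
    increments = 0.5 * (phi[1:] + phi[:-1]) * np.diff(x)
    return u0 + np.concatenate(([0.0], np.cumsum(increments)))

phi = np.zeros_like(x)       # start the iteration from the zero function
for _ in range(30):          # 30 Picard iterations
    phi = T(phi)

print(np.max(np.abs(phi - np.exp(x))) < 1e-5)  # True
```

Starting from the zero function, the $n$th iterate is the degree-$(n-1)$ Taylor polynomial of $e^x$ (up to discretization error), so 30 iterations leave only the grid error of the trapezoid rule.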