---
title: Ch5-4
tags: Linear algebra
GA: G-77TT93X4N1
---
# Chapter 5 extra note 4
## Selected lecture notes
> orthogonal complement
> direct sum
> linear functional
> Riesz Representation Theorem
> adjoint linear transformation
### Orthogonal complement
:::info
**Definition:**
For a vector subspace $E\subseteq V$, its **orthogonal complement** is $E^{\perp}=\{{\bf v}\in V \mid {\bf v}\perp E\}$.
:::
**Proposition:**
$E^{\perp}$ is a vector subspace.
* Proof:
> Let ${\bf v}_1, {\bf v}_2\in E^\perp$, and fix $\alpha_1, \alpha_2\in F$.
>
> claim: $\alpha_1{\bf v}_1+\alpha_2{\bf v}_2\in E^\perp$:
> > Given ${\bf u}\in E$, since ${\bf v}_1, {\bf v}_2\in E^\perp$, we have
> $$
> \langle{\bf v}_1, {\bf u}\rangle = \langle{\bf v}_2, {\bf u}\rangle = 0.
> $$
> So
> $$
> 0 = \alpha_1\langle{\bf v}_1, {\bf u}\rangle + \alpha_2 \langle{\bf v}_2, {\bf u}\rangle = \langle\alpha_1{\bf v}_1+\alpha_2{\bf v}_2, {\bf u}\rangle,
> $$
> that is, $(\alpha_1{\bf v}_1+\alpha_2{\bf v}_2) \perp {\bf u}$.
> Since ${\bf u}\in E$ was arbitrary, $(\alpha_1{\bf v}_1+\alpha_2{\bf v}_2) \perp E$, so $\alpha_1{\bf v}_1+\alpha_2{\bf v}_2\in E^\perp$.
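Numerically, $E^\perp$ can be computed as the orthogonal complement of the column space of a matrix whose columns span $E$. A minimal numpy sketch, with an assumed example matrix `A`:

```python
import numpy as np

# Assumed example: E = span of the columns of A in R^4
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.],
              [0., 0.]])

# The columns of U past rank(A) form an orthonormal basis of E^perp
U, s, _ = np.linalg.svd(A)
r = np.sum(s > 1e-12)        # rank of A
B = U[:, r:]                 # orthonormal basis of E^perp

# Every basis vector of E^perp is orthogonal to every column of A
print(np.allclose(A.T @ B, 0))  # prints True
```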
**Proposition:**
$E \cap E^\perp = \{{\bf 0}\}$.
* Proof:
> First, we show that $E \cap E^\perp$ is nonempty; in fact it contains the zero vector ${\bf 0}$.
> > Since $E$ and $E^\perp$ are both vector subspaces, ${\bf 0}\in E$ and ${\bf 0}\in E^\perp$, hence ${\bf 0}\in (E \cap E^\perp)$.
>
> Secondly, we show that ${\bf 0}$ is the only element.
> > Let ${\bf v}\in E \cap E^\perp$. We have ${\bf v}\in E$ and ${\bf v}\in E^\perp$, so
> $$
> 0=\langle{\bf v}, {\bf v}\rangle=\|{\bf v}\|^2.
> $$
> Hence ${\bf v}={\bf 0}$.
**Proposition:**
Let $E\subseteq V$ be a subspace. For any ${\bf v}\in V$, there exists a unique decomposition
$$
{\bf v} = {\bf v}_1 + {\bf v}_2, \quad {\bf v}_1\in E, \quad {\bf v}_2\in E^\perp.
$$
* Proof:
> (Existence)
> Given ${\bf v}\in V$, we define ${\bf v}_1=P_E{\bf v}\in E$, and ${\bf v}_2={\bf v}-{\bf v}_1$.
> By the definition of orthogonal projection we have ${\bf v}_2\in E^\perp$.
>
>
> (Uniqueness)
> Assume
> $$
> {\bf v} = {\bf v}_1 + {\bf v}_2 = {\bf w}_1 + {\bf w}_2,
> $$
> where ${\bf v}_1, {\bf w}_1\in E$ and ${\bf v}_2, {\bf w}_2\in E^\perp$.
> We define ${\bf u} = {\bf v}_1 - {\bf w}_1$. Since ${\bf v}_1, {\bf w}_1\in E$ we must have ${\bf u}\in E$. In addition, we find
> $$
> {\bf u} = {\bf v}_1 - {\bf w}_1 = ({\bf v} - {\bf v}_2) - ({\bf v} - {\bf w}_2) = {\bf w}_2 - {\bf v}_2.
> $$
> Since ${\bf v}_2, {\bf w}_2\in E^\perp$ we must have ${\bf u}\in E^\perp$. As a result, ${\bf u}\in (E\cap E^\perp) = \{{\bf 0}\}$, so ${\bf u}={\bf 0}$, and therefore ${\bf v}_1={\bf w}_1$, ${\bf v}_2={\bf w}_2$.
**Proposition:**
Let $E\subseteq V$ be a subspace. For any ${\bf v}\in V$,
$$
P_{E^\perp}{\bf v} = {\bf v}-P_E{\bf v}.
$$
* Proof:
> Let ${\bf v}_2={\bf v}-P_E{\bf v}$. Then ${\bf v}_2\perp E$, so we have
> (1) ${\bf v}_2\in E^\perp$.
> Besides, ${\bf v} - {\bf v}_2 = P_E{\bf v}\perp E^\perp$. So we obtain
> (2) $({\bf v} - {\bf v}_2) \perp E^\perp$.
> By (1), (2) and the definition of orthogonal projection, we have
> $$
> P_{E^\perp}{\bf v} = {\bf v}-P_E{\bf v}.
> $$
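The two propositions above can be checked numerically. With $E$ the column space of an assumed random matrix `A`, the orthogonal projection is $P_E = A(A^TA)^{-1}A^T$ and $P_{E^\perp} = I - P_E$; a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))          # assumed example: E = col(A) in R^5
P_E = A @ np.linalg.solve(A.T @ A, A.T)  # orthogonal projection onto E
P_perp = np.eye(5) - P_E                 # P_{E^perp} = I - P_E

v = rng.standard_normal(5)
v1, v2 = P_E @ v, P_perp @ v
print(np.allclose(v, v1 + v2))   # v = v1 + v2, prints True
print(np.allclose(A.T @ v2, 0))  # v2 lies in E^perp, prints True
```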
:::info
**Definition:**
Let $W_1$ and $W_2$ be vector subspaces. We define $W_1+W_2$ to be the set
$$
W_1+W_2 = \{{\bf v} | {\bf v} = {\bf w}_1 + {\bf w}_2, \quad {\bf w}_1\in W_1, {\bf w}_2\in W_2\}.
$$
:::
**Proposition:**
$W_1+W_2$ is a vector subspace.
:::info
**Definition:**
A vector space $V$ is called the **direct sum** of subspaces $W_1$ and $W_2$ if $W_1\cap W_2=\{{\bf 0}\}$ and $V=W_1+W_2$. We denote it by $V=W_1\oplus W_2$.
:::
**Proposition:**
$V = E\oplus E^\perp$.
**Proposition:**
$\left(E^\perp\right)^\perp = E$.
* Proof:
> Let ${\bf u}\in E$. Then $\langle{\bf u}, {\bf v}\rangle=0$ for all ${\bf v}\in E^\perp$, which says precisely that ${\bf u}\in \left(E^\perp\right)^\perp$.
> Therefore, $E\subseteq \left(E^\perp\right)^\perp$.
>
> Let ${\bf u}\in \left(E^\perp\right)^\perp\subseteq V$. By the decomposition $V=E\oplus E^\perp$, we can write ${\bf u} = {\bf v}+{\bf w}$, where ${\bf v}\in E$ and ${\bf w}\in E^\perp$.
> Since ${\bf u}\in \left(E^\perp\right)^\perp$ and ${\bf v}\in E \subseteq \left(E^\perp\right)^\perp$, we have ${\bf w}={\bf u}-{\bf v}\in \left(E^\perp\right)^\perp$.
> But we also have ${\bf w}\in E^\perp$.
> As a result we must have ${\bf w}={\bf 0}$, and ${\bf u}={\bf v}\in E$.
> Therefore, $\left(E^\perp\right)^\perp\subseteq E$.
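The identity $\left(E^\perp\right)^\perp = E$ can also be verified numerically: taking the complement twice recovers the original subspace. A minimal numpy sketch with an assumed random matrix `A` spanning $E$:

```python
import numpy as np

def perp_basis(M):
    """Orthonormal basis of the orthogonal complement of col(M)."""
    U, s, _ = np.linalg.svd(M)
    r = np.sum(s > 1e-12)  # rank of M
    return U[:, r:]

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2))  # assumed example: E = col(A) in R^5
B = perp_basis(A)                # basis of E^perp
C = perp_basis(B)                # basis of (E^perp)^perp

# col(C) = col(A): projecting C's columns onto E leaves them unchanged
P_E = A @ np.linalg.solve(A.T @ A, A.T)
print(np.allclose(P_E @ C, C))   # prints True
```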
### Linear functional
:::info
**Definition:**
A **linear functional** on a vector space $V$ is a linear map from $V$ to $\mathbb{R}$. That is, a linear functional belongs to $\mathcal{L}(V, \mathbb{R})$.
:::
#### Examples of linear functionals
* Example 1:
$$
\phi_1:\mathbb{P}_2\to\mathbb{R}, \quad \phi_1(p)=p(1).
$$
* Example 2:
$$
\phi_2:\mathbb{P}_2\to\mathbb{R}, \quad \phi_2(p)=\int^1_0 p(x)\,dx.
$$
* Example 3:
$$
\phi_3:\mathbb{P}_2\to\mathbb{R}, \quad \phi_3(p)=\int^1_0 p(x)\sin(x)\,dx.
$$
**Riesz Representation Theorem:**
Suppose $V$ is a finite-dimensional inner product space and $\phi:V\to\mathbb{R}$ is a linear functional. Then there exists a unique ${\bf u}\in V$ such that
$$
\phi({\bf v}) = \langle{\bf v}, {\bf u}\rangle, \quad \forall{\bf v}\in V.
$$
* Proof:
> (existence)
> Let $\{e_1, \cdots, e_n\}\subset V$ be an orthonormal basis of $V$.
>
> Given ${\bf v}\in V$, we then have
> $$
> \begin{align}
> {\bf v} &= \alpha_1 e_1 + \cdots + \alpha_n e_n\\
> &= \langle{\bf v}, e_1\rangle e_1 + \cdots + \langle{\bf v}, e_n\rangle e_n.
> \end{align}
> $$
> Since $\phi$ is linear,
> $$
> \begin{align}
> \phi({\bf v}) &= \phi\left(\langle{\bf v}, e_1\rangle e_1 + \cdots + \langle{\bf v}, e_n\rangle e_n\right)\\
> &=\langle{\bf v}, e_1\rangle\phi(e_1) + \cdots +\langle{\bf v}, e_n\rangle\phi(e_n)\\
> &=\langle{\bf v}, \phi(e_1)e_1\rangle + \cdots +\langle{\bf v}, \phi(e_n)e_n\rangle\\
> &=\langle{\bf v}, \phi(e_1)e_1+\cdots+\phi(e_n)e_n\rangle.
> \end{align}
> $$
> Let us define
> $$
> {\bf u} = \phi(e_1)e_1+\cdots+\phi(e_n)e_n,
> $$
> it is then clear that $\phi({\bf v})=\langle{\bf v}, {\bf u}\rangle$.
>
> (uniqueness)
> Suppose $\exists {\bf u}_1, {\bf u}_2\in V$ such that
> $$
> \phi({\bf v})=\langle {\bf v}, {\bf u}_1\rangle = \langle {\bf v}, {\bf u}_2\rangle, \quad \forall {\bf v}\in V.
> $$
> Then we have
> $$
> 0 = \langle {\bf v}, {\bf u}_1-{\bf u}_2\rangle, \quad \forall {\bf v}\in V.
> $$
> In particular, taking ${\bf v}={\bf u}_1-{\bf u}_2$ gives $\|{\bf u}_1-{\bf u}_2\|^2=0$. Therefore ${\bf u}_1-{\bf u}_2={\bf 0}$, i.e., ${\bf u}_1={\bf u}_2$.
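As a concrete check of the theorem, take $V=\mathbb{P}_2$ with $\langle p,q\rangle=\int_0^1 p(x)q(x)\,dx$ and $\phi(p)=p(1)$ (Example 1 above). In the monomial basis $\{1, x, x^2\}$, the representing vector ${\bf u}$ is found by solving a linear system with the Gram matrix; a minimal numpy sketch:

```python
import numpy as np

# Inner product on P_2: <p, q> = integral of p(x) q(x) over [0, 1].
# Gram matrix of the monomial basis {1, x, x^2}: G[i][j] = 1/(i+j+1)
G = np.array([[1.0 / (i + j + 1) for j in range(3)] for i in range(3)])

# phi(p) = p(1), so phi(x^i) = 1 for each basis monomial
phi_on_basis = np.ones(3)

# The representing vector u has monomial coefficients c with G c = phi-values
c = np.linalg.solve(G, phi_on_basis)

# Check: for a random p with coefficients a, phi(p) = p(1) = sum(a) = <p, u>
rng = np.random.default_rng(0)
a = rng.standard_normal(3)
print(np.isclose(a.sum(), a @ G @ c))  # prints True
```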
### Adjoint linear transformation
:::info
**Definition:**
Let $V$ and $W$ be inner product spaces and $T:V\to W$ be a linear transformation. The map $T^*:W\to V$ is called an **adjoint** of $T$ if
$$
\langle T{\bf v}, {\bf w}\rangle = \langle {\bf v}, T^*{\bf w}\rangle, \quad \forall {\bf v}\in V, {\bf w}\in W.
$$
:::
**Remark:**
Here we show the existence of the map $T^*$.
Given ${\bf w}\in W$, we define $\phi({\bf v})=\langle T{\bf v}, {\bf w}\rangle$. It is clear that $\phi$ is a linear functional. Hence, by the Riesz Representation Theorem, there exists a unique ${\bf u}\in V$ such that
$$
\phi({\bf v})=\langle {\bf v}, {\bf u}\rangle.
$$
We then define $T^*{\bf w}={\bf u}$ and we have
$$
\langle T{\bf v}, {\bf w}\rangle = \phi({\bf v})=\langle {\bf v}, {\bf u}\rangle = \langle {\bf v}, T^*{\bf w}\rangle.
$$
So the existence of $T^*$ is assured.
**Proposition**
By symmetry of the real inner product, $T^*$ is an adjoint of $T$ if and only if
$$
\langle {\bf w}, T{\bf v}\rangle = \langle T^*{\bf w}, {\bf v}\rangle, \quad \forall {\bf v}\in V, {\bf w}\in W.
$$
**Proposition**
$T^*$ is unique.
* Proof:
> Let $S:W\to V$ be a map that also satisfies
> $$
> \langle T{\bf v}, {\bf w}\rangle = \langle {\bf v}, S{\bf w}\rangle, \quad \forall {\bf v}\in V, {\bf w}\in W.
> $$
> Then
> $$
> \langle {\bf v}, S{\bf w}\rangle = \langle T{\bf v}, {\bf w}\rangle = \langle {\bf v}, T^*{\bf w}\rangle, \quad \forall {\bf v}\in V, {\bf w}\in W.
> $$
> So
> $$
> \langle {\bf v}, (S-T^*){\bf w}\rangle=0, \quad \forall {\bf v}\in V, {\bf w}\in W.
> $$
> Taking ${\bf v}=(S-T^*){\bf w}$ gives $\|(S-T^*){\bf w}\|^2=0$ for every ${\bf w}\in W$. Hence, $S = T^*$.
**Proposition**
$T^*$ is linear.
* Proof:
> Fix ${\bf w}_1, {\bf w}_2\in W$ and $\alpha_1, \alpha_2\in F$ (when $F=\mathbb{R}$, the conjugates below are trivial). Given ${\bf v}\in V$,
> $$
> \begin{align}
> \langle {\bf v}, T^*(\alpha_1{\bf w}_1 + \alpha_2{\bf w}_2)\rangle &= \langle T({\bf v}), \alpha_1{\bf w}_1 + \alpha_2{\bf w}_2\rangle\\
> &=\overline{\alpha_1}\langle T({\bf v}), {\bf w}_1\rangle+\overline{\alpha_2}\langle T({\bf v}), {\bf w}_2\rangle\\
> &=\overline{\alpha_1}\langle {\bf v}, T^*({\bf w}_1)\rangle+\overline{\alpha_2}\langle {\bf v}, T^*({\bf w}_2)\rangle\\
> &=\langle {\bf v}, \alpha_1T^*({\bf w}_1) + \alpha_2T^*({\bf w}_2)\rangle.
> \end{align}
> $$
> Therefore
> $$
> \langle {\bf v}, T^*(\alpha_1{\bf w}_1 + \alpha_2{\bf w}_2)\rangle = \langle {\bf v}, \alpha_1T^*({\bf w}_1) + \alpha_2T^*({\bf w}_2)\rangle, \quad \forall{\bf v}\in V.
> $$
> That leads to
> $$
> T^*(\alpha_1{\bf w}_1 + \alpha_2{\bf w}_2) = \alpha_1T^*({\bf w}_1) + \alpha_2T^*({\bf w}_2).
> $$
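For $T{\bf v}=A{\bf v}$ on $\mathbb{R}^n$ with the standard dot product, the adjoint is given by the transpose, $T^*{\bf w}=A^T{\bf w}$. A minimal numpy check with an assumed random matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4))  # assumed example: T v = A v, T : R^4 -> R^3
v = rng.standard_normal(4)
w = rng.standard_normal(3)

# With the standard dot product, the adjoint of v -> A v is w -> A^T w
print(np.isclose((A @ v) @ w, v @ (A.T @ w)))  # <Tv, w> = <v, T*w>, prints True
```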