---
title: Ch1-3
tags: Linear algebra
GA: G-77TT93X4N1
---
# Chapter 1 extra note 3
> Linear transformation
> Matrix representation of a linear transformation
> Range
> Kernel
## YouTube
* [Abstract vector spaces](https://youtu.be/TgKwz5Ikpc8?si=JiDyRF_NqtXKgbAI)
* [Linear transformation](https://youtu.be/UF59Mok4fsQ?si=gA_4bryalq7ddiRL)
* [Linear Transformations on Vector Spaces](https://youtu.be/is1cg5yhdds?si=U-ENRMSly0W5LmVF)
* [2D Convolution Neural Network Animation](https://youtu.be/CXOGvCMLrkA?si=q_tbp7qsQ9hIut8c)
## Selected lecture notes:
### Linear transformation
:::info
#### Definition
Let $V$, $W$ be vector spaces over the same field $F$. The map $T: V\to W$ is a **linear transformation** if
1. $T(u + v) = T(u) + T(v), \quad \forall u, v\in V$.
2. $T(\alpha v) = \alpha T(v), \quad \forall \alpha\in F, v\in V$.
Or equivalently,
$$
T(\alpha u + \beta v) = \alpha T(u) + \beta T(v), \quad \forall \alpha, \beta\in F, u, v\in V.
$$
:::
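The definition can be tested symbolically. Below is a minimal SymPy sketch (the test polynomials `u`, `v` and the squaring map `S` are our own illustrative choices, not from the notes): the derivative map satisfies the linearity identity for all $a$, $b$, while squaring does not.
```python
import sympy as sp

x, a, b = sp.symbols('x a b')
u = 3 + 2*x - x**2   # arbitrary test polynomials (our own choice)
v = 1 - 4*x + 5*x**2

T = lambda p: sp.diff(p, x)   # differentiation: a linear map
S = lambda p: p**2            # squaring: not linear

# For a linear map, T(a*u + b*v) - a*T(u) - b*T(v) is identically zero.
print(sp.expand(T(a*u + b*v) - a*T(u) - b*T(v)))  # prints 0
print(sp.expand(S(a*u + b*v) - a*S(u) - b*S(v)))  # nonzero -> not linear
```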
#### Lemma:
Let $T:V\to W$ be a linear transformation. Then $T({\bf 0})={\bf 0}$.
* Proof:
> Let ${\bf u}\in V$. Using property 2 with $\alpha = 0$,
> $$
> T({\bf 0}) = T(0\cdot {\bf u}) = 0\cdot T({\bf u}) = {\bf 0}.
> $$
#### Examples of linear transformations
* $T: \mathbb{P}_2(x)\to \mathbb{P}_1(x)$ with $T(p) = \frac{d}{dx} p(x)$.
* $T: \mathbb{P}_n(x)\to \mathbb{P}_{n+1}(x)$ with $T(p) = \int^x_0 p(s)\,\text{d}s$.
* $T: \mathbb{P}_3(x)\to \mathbb{P}_3(x)$ with $T(p) = \frac{d}{dx} p(x)$.
* a) Let $\beta = \{1, x, x^2, x^3\}$ be a basis of $\mathbb{P}_3(x)$. For any ${\bf v}\in \mathbb{P}_3(x)$, we have
$$
\tag{1}
{\bf v} = \alpha_0\cdot 1 + \alpha_1\cdot x + \alpha_2\cdot x^2 + \alpha_3\cdot x^3.
$$
Also, we have
$$
\tag{2}
T({\bf v}) = \alpha_0\cdot T(1) + \alpha_1\cdot T(x) + \alpha_2\cdot T(x^2) + \alpha_3\cdot T(x^3),
$$
and notice that
$$
\begin{align}
T(1) &= 0 = 0\cdot 1 + 0\cdot x + 0\cdot x^2 + 0\cdot x^3 = \begin{bmatrix}1 & x & x^2 & x^3\end{bmatrix}\begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \end{bmatrix},\\
T(x) &= 1 = \begin{bmatrix}1 & x & x^2 & x^3\end{bmatrix}\begin{bmatrix}1 \\ 0 \\ 0 \\ 0 \end{bmatrix},\\
T(x^2) &= 2x = \begin{bmatrix}1 & x & x^2 & x^3\end{bmatrix}\begin{bmatrix}0 \\ 2 \\ 0 \\ 0 \end{bmatrix},\\
T(x^3) &= 3x^2 = \begin{bmatrix}1 & x & x^2 & x^3\end{bmatrix}\begin{bmatrix}0 \\ 0 \\ 3 \\ 0 \end{bmatrix}.
\end{align}
$$
We can then rewrite (2) as
$$
\begin{align}
T({\bf v}) &= \begin{bmatrix}1 & x & x^2 & x^3\end{bmatrix}\left(\alpha_0\cdot \begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \end{bmatrix} + \alpha_1\cdot \begin{bmatrix}1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + \alpha_2\cdot \begin{bmatrix}0 \\ 2 \\ 0 \\ 0 \end{bmatrix} + \alpha_3\cdot \begin{bmatrix}0 \\ 0 \\ 3 \\ 0 \end{bmatrix}\right)\\
&=\begin{bmatrix}1 & x & x^2 & x^3\end{bmatrix} \begin{bmatrix}0 & 1 & 0 & 0\\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix}\alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix}\\
&=\begin{bmatrix}1 & x & x^2 & x^3\end{bmatrix} \begin{bmatrix}\alpha_1 \\ 2\alpha_2 \\ 3\alpha_3 \\ 0 \end{bmatrix} \\
&= \alpha_1\cdot 1 + (2\alpha_2)\cdot x + (3\alpha_3)\cdot x^2 + 0\cdot x^3.
\end{align}
$$
Therefore, given the **coordinates** $(\alpha_0, \alpha_1, \alpha_2, \alpha_3)$ of ${\bf v}$, the following matrix-vector multiplication gives the **coordinates** of $T({\bf v})$:
$$
\begin{bmatrix}0 & 1 & 0 & 0\\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix}\alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix}.
$$
* b) Let $\gamma = \{1, 2x, 3x^2, 4x^3\}$ be another basis of $\mathbb{P}_3(x)$; we now express $T({\bf v})$ in the basis $\gamma$, while still using the $\beta$-coordinates $(\alpha_0, \alpha_1, \alpha_2, \alpha_3)$ for ${\bf v}$. We have
$$
\begin{align}
T(1) &= 0 = \begin{bmatrix}1 & 2x & 3x^2 & 4x^3\end{bmatrix}\begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \end{bmatrix},\\
T(x) &= 1 = \begin{bmatrix}1 & 2x & 3x^2 & 4x^3\end{bmatrix}\begin{bmatrix}1 \\ 0 \\ 0 \\ 0 \end{bmatrix},\\
T(x^2) &= 2x = \begin{bmatrix}1 & 2x & 3x^2 & 4x^3\end{bmatrix}\begin{bmatrix}0 \\ 1 \\ 0 \\ 0 \end{bmatrix},\\
T(x^3) &= 3x^2 = \begin{bmatrix}1 & 2x & 3x^2 & 4x^3\end{bmatrix}\begin{bmatrix}0 \\ 0 \\ 1 \\ 0 \end{bmatrix},
\end{align}
$$
and the $\gamma$-coordinates of $T({\bf v})$ are given by the matrix-vector product below (both this matrix and the one from part (a) are checked in the sketch after this example):
$$
\begin{bmatrix}0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix}\alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix}.
$$
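Both matrices above can be reproduced mechanically: apply $T$ to each basis polynomial of $\beta$ and collect its coordinates in the output basis as a column. Here is a minimal SymPy sketch under the bases of parts (a) and (b); the helper `coords` is our own illustrative construction:
```python
import sympy as sp

x = sp.symbols('x')
beta  = [1, x, x**2, x**3]          # basis of the domain
gamma = [1, 2*x, 3*x**2, 4*x**3]    # alternative basis of the codomain

def coords(p, basis):
    """Coordinates of polynomial p in the given basis, found by
    matching coefficients of 1, x, x^2, ..."""
    cs = sp.symbols(f'c0:{len(basis)}')
    residual = sum(c*b for c, b in zip(cs, basis)) - p
    sol = sp.solve(sp.Poly(sp.expand(residual), x).all_coeffs(), cs, dict=True)[0]
    return [sol.get(c, 0) for c in cs]

T = lambda p: sp.diff(p, x)
# Columns are the coordinates of T applied to each basis polynomial.
print(sp.Matrix([coords(T(p), beta)  for p in beta]).T)  # matrix of part (a)
print(sp.Matrix([coords(T(p), gamma) for p in beta]).T)  # matrix of part (b)
```
The columns produced this way are exactly the coordinate vectors computed by hand above, which anticipates the general definition in the next section.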
---
### Matrix representation of a linear transformation
:::info
**Definition:**
Let $\beta=\{v_1, \cdots, v_n\}$ be a basis of $V$ and $\gamma=\{w_1, \cdots, w_m\}$ be a basis of $W$, and let $T:V\to W$ be a linear transformation. Then, for each $j$ with $1\le j\le n$, there exist unique scalars $a_{ij}$, $1\le i\le m$, such that
$$
T(v_j) = \sum^m_{i=1} a_{ij}w_i.
$$
The $m\times n$ matrix $[a_{ij}]$ is called the ***matrix representation*** of $T$ with respect to the bases $\beta$ and $\gamma$, and is denoted by $[T]^{\gamma}_{\beta}$.
If $T:V\to V$ and the same basis $\beta$ is used for the domain and codomain, the matrix representation is denoted by $[T]_{\beta}$.
:::
#### Examples of the matrix representation
* $T: \mathbb{P}_3(x)\to \mathbb{P}_3(x)$ with $T(p) = \frac{d}{dx} p(x)$.
* $\beta = \{1, x, x^2, x^3\}$ and $\gamma = \{1, 2x, 3x^2, 4x^3\}$.
$$
[T]_{\beta} = \begin{bmatrix}0 & 1 & 0 & 0\\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad
[T]^{\gamma}_{\beta} =
\begin{bmatrix}0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}.
$$
* $T: \mathbb{P}_2(x)\to \mathbb{P}_2(x)$ with $T(p) = p'' - 3p' + 3p$.
* $\beta = \{1, x, x^2\}$.
* First approach - Compute $T(1) = 3$, $T(x) = -3 + 3x$, and $T(x^2) = 2 - 6x + 3x^2$; reading off their coefficient columns gives
$$
[T]_{\beta} = \begin{bmatrix}3 & -3 & 2\\ 0 & 3 & -6 \\ 0 & 0 & 3 \end{bmatrix}.
$$
* Second approach - Let's define two linear transformations, $D:\mathbb{P}_2(x)\to \mathbb{P}_2(x)$ with $D(p) = p'$ and $I:\mathbb{P}_2(x)\to \mathbb{P}_2(x)$ with $I(p) = p$. It is then clear that
$$
T = D\circ D - 3D + 3I.
$$
The matrix representations of $D$ and $I$ are
$$
[D]_{\beta} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix}, \quad [I]_{\beta} = \begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},
$$
therefore
$$
[T]_{\beta} = [D]_{\beta}^2 - 3[D]_{\beta} + 3[I]_{\beta} = \begin{bmatrix}3 & -3 & 2\\ 0 & 3 & -6 \\ 0 & 0 & 3 \end{bmatrix}.
$$
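Since the second approach reduces to matrix arithmetic, it is easy to check numerically. A minimal NumPy sketch using the matrices above:
```python
import numpy as np

# [D]_beta and [I]_beta on P_2(x) with beta = {1, x, x^2}
D = np.array([[0, 1, 0],
              [0, 0, 2],
              [0, 0, 0]])
I = np.eye(3, dtype=int)

# [T]_beta = [D]_beta^2 - 3[D]_beta + 3[I]_beta
print(D @ D - 3*D + 3*I)
# [[ 3 -3  2]
#  [ 0  3 -6]
#  [ 0  0  3]]
```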
---
### Range and kernel
:::info
**Definition:**
Let $T:V\to W$ be a linear transformation.
* The **range** of $T$ is defined as
$$
\text{range}(T) = \{{\bf w}\in W | {\bf w} = T({\bf v}) \text{ for some } {\bf v}\in V\}.
$$
* Other names:
$$
\text{range}(T) = \text{Ran}(T) = \text{image}(T) = \text{Im}(T).
$$
* The **kernel** of $T$ is defined as
$$
\text{kernel}(T) = \{{\bf v}\in V | T({\bf v})={\bf 0}\}.
$$
* Other names:
$$
\text{kernel}(T) = \text{Ker}(T) = \text{null}(T) = \text{N}(T).
$$
:::
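Once bases are fixed, the kernel and range of $T$ correspond to the null space and column space of its matrix representation. A short SymPy sketch (our own check) for the earlier example $T(p) = p'$ on $\mathbb{P}_3(x)$, using the matrix $[T]_{\beta}$ computed above:
```python
import sympy as sp

# [T]_beta for T(p) = p' on P_3(x), with beta = {1, x, x^2, x^3}
M = sp.Matrix([[0, 1, 0, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 3],
               [0, 0, 0, 0]])

# Kernel: coordinate vectors v with M v = 0.
print(M.nullspace())    # [(1, 0, 0, 0)^T]
# Range: the column space of M.
print(M.columnspace())  # pivot columns of M
```
The single null-space vector is the coordinate vector of the constant polynomial $1$, so $\text{kernel}(T)$ is the set of constants, and the column-space vectors are coordinates of $1$, $2x$, $3x^2$, so $\text{range}(T) = \mathbb{P}_2(x)$: differentiation kills constants and lowers degree by one.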