---
title: Linear Algebra Note 1
tags: Linear Algebra, 線性代數, 魏群樹, 大學, 國立陽明交通大學, 筆記
---
# Linear Algebra Note 1
## 1. Introduction to Vectors (2021.09.22)
- Define: A vector is an ordered collection of numbers in $\mathbb{R}$
$\vec{V} = \left[\begin{array}{c}
1 \\
1 \\
\end{array}\right],\ \vec{W} = \left[\begin{array}{c}
2 \\
3 \\
\end{array}\right]$
- Operations
1. vector addition
$\vec{V} + \vec{W} = \left[\begin{array}{c}
1 \\
1 \\
\end{array}\right] + \left[\begin{array}{c}
2 \\
3 \\
\end{array}\right] = \left[\begin{array}{c}
1 + 2 \\
1 + 3 \\
\end{array}\right] = \left[\begin{array}{c}
3 \\
4 \\
\end{array}\right]$
2. scalar multiplication
$c\vec{V} = c\left[\begin{array}{c}
1 \\
1 \\
\end{array}\right] = \left[\begin{array}{c}
1 \cdot c \\
1 \cdot c \\
\end{array}\right] = \left[\begin{array}{c}
c \\
c \\
\end{array}\right], \quad \text{for } c, d \in \mathbb{R}$
- Linear combination
$c\vec{V} + d\vec{W} = c\left[\begin{array}{c}
1 \\
1 \\
\end{array}\right] + d\left[\begin{array}{c}
2 \\
3 \\
\end{array}\right] = \left[\begin{array}{c}
c \\
c \\
\end{array}\right] + \left[\begin{array}{c}
2d \\
3d \\
\end{array}\right] = \left[\begin{array}{c}
c + 2d \\
c + 3d \\
\end{array}\right]$
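These operations correspond directly to array arithmetic; a minimal NumPy sketch for illustration (the scalar values are our own, not from the lecture):
```python
import numpy as np

v = np.array([1, 1])
w = np.array([2, 3])

print(v + w)          # vector addition: [3 4]
print(3 * v)          # scalar multiplication with c = 3: [3 3]

c, d = 2, -1          # arbitrary scalars
print(c * v + d * w)  # linear combination c*v + d*w: [0 -1]
```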
### Axiom ("Vector Space Laws" taken to be true)
1. Association of addition
$\vec{u} + (\vec{v} + \vec{w}) = (\vec{u} + \vec{v}) + \vec{w}$
2. Commutativity of addition
$\vec{u} + \vec{v} = \vec{v} + \vec{u}$
3. Distributivity of scalar multiplication with respect to vector addition
$a(\vec{u} + \vec{v}) = a\vec{u} + a\vec{v}$
4. Distributivity of scalar multiplication with respect to field (scalar) addition
$(a+b)\vec{v} = a\vec{v} + b\vec{v}$
5. Compatibility of scalar multiplication with field multiplication
$a(b\vec{v}) = (ab)\vec{v}$
6. Identity element of addition
$\vec{V} + \vec{O} = \vec{V}$
> There exists an element $\vec{O}$ in $\mathbb{V}$ (vector space), called the zero vector, such that $\vec{V} + \vec{O} = \vec{V}$ for all $\vec{V}$ in $\mathbb{V}$
7. Identity element of scalar multiplication
$1\vec{V} = \vec{V}$
8. Inverse element of addition
$\vec{V} + (-\vec{V}) = \vec{O}$
> For every $\vec{V}$ in $\mathbb{V}$, there exists an element $-\vec{V}$ in $\mathbb{V}$, called the additive inverse of $\vec{V}$, such that $\vec{V} + (-\vec{V}) = \vec{O}$
## 2. Solving Linear Equations (2021.09.24 ~ 2021.10.08)
- system of equations → $A \vec{X} = \vec{b}$
> The problem is equivalent to finding a linear combination of the column vectors of $A$ such that the result is $\vec{b}$
> The coefficients $(x, y)$ of that combination are the components of the solution $\vec{X}$
- 2-variable case
$\begin{split}\left\{\begin{array}{c}
x - 2y = 1 \\
3x + 2y = 11 \\
\end{array}\right.\ &\xrightarrow{matrix\ form} \ \left[\begin{array}{c c}
1 & -2 \\
3 & 2 \\
\end{array}\right] \left[\begin{array}{c}
x \\
y \\
\end{array}\right] = \left[\begin{array}{c}
1 \\
11 \\
\end{array}\right] \\ &\xrightarrow{column\ picture} \ x\left[\begin{array}{c}
1 \\
3 \\
\end{array}\right] + y\left[\begin{array}{c}
-2 \\
2 \\
\end{array}\right] = \left[\begin{array}{c}
1 \\
11 \\
\end{array}\right]\end{split}$
- 3-variable case
$\begin{split}\left\{\begin{array}{c}
x + 2y + 3z = 6 \\
2x + 5y + 2z = 4 \\
6x - 3y + z = 2 \\
\end{array}\right.\ &\xrightarrow{matrix\ form} \ \left[\begin{array}{c c c}
1 & 2 & 3 \\
2 & 5 & 2 \\
6 & -3 & 1 \\
\end{array}\right] \left[\begin{array}{c}
x \\
y \\
z \\
\end{array}\right] = \left[\begin{array}{c}
6 \\
4 \\
2 \\
\end{array}\right] \\ &\xrightarrow{column\ picture} \ x\left[\begin{array}{c}
1 \\
2 \\
6 \\
\end{array}\right] + y\left[\begin{array}{c}
2 \\
5 \\
-3 \\
\end{array}\right] + z\left[\begin{array}{c}
3 \\
2 \\
1 \\
\end{array}\right] = \left[\begin{array}{c}
6 \\
4 \\
2 \\
\end{array}\right]\end{split}$
- Why use the column picture?
- For a given $A$ , the structure of the problem is fixed for all $\vec{b}$
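As a quick numerical check of the 3-variable example, treating $A\vec{X} = \vec{b}$ as a combination of the columns of $A$ (this sketch uses NumPy's built-in solver rather than the elimination procedure introduced next):
```python
import numpy as np

A = np.array([[1,  2, 3],
              [2,  5, 2],
              [6, -3, 1]], dtype=float)
b = np.array([6, 4, 2], dtype=float)

x = np.linalg.solve(A, b)                           # solve A x = b
combo = x[0]*A[:, 0] + x[1]*A[:, 1] + x[2]*A[:, 2]  # column picture: combination of the columns
print(np.allclose(combo, b))                        # True
```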
### Gaussian Elimination (named after Carl Friedrich Gauss)
$\left\{\begin{array}{c}
x - 2y = 1 \\
3x + 2y = 11 \\
\end{array}\right.\ \xrightarrow{augmented\ matrix} \ \left[\begin{array}{c c|c}
1 & -2 & 1 \\
3 & 2 & 11 \\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
\end{array}\right.$
$\begin{split}&\left[\begin{array}{c c|c}
\color{red}{1} & -2 & 1 \\
3 & 2 & 11 \\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
\end{array}\right. \\
\xrightarrow{①\times(-3)+②\ \to\ ②} &\left[\begin{array}{c c|c}
\color{red}{1} & -2 & 1 \\
0 & \color{red}{8} & 8 \\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
\end{array}\right. \\
\xrightarrow{②\div 8\ \to\ ②} &\left[\begin{array}{c c|c}
\color{red}{1} & -2 & 1 \\
0 & \color{red}{1} & 1 \\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
\end{array}\right. \\
\xrightarrow{②\times 2+①\ \to\ ①} &\left[\begin{array}{c c|c}
\color{red}{1} & 0 & 3 \\
0 & \color{red}{1} & 1 \\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
\end{array}\right. \\
\xrightarrow{} &\ \ \ \ \left.\begin{array}{c}
x = 3 \\
y = 1 \\
\end{array}\right.\end{split}$
- Fact: after Gaussian Elimination, $A$ becomes an upper triangular matrix $U$, and the augmented matrix becomes a matrix in ==row echelon form==
> - A matrix is in row echelon form if
> - All rows consisting of only zeros are at the bottom
> - The leading coefficient (pivot) of a non-zero row is always strictly to the right of the leading coefficient of the row above it
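The forward-elimination phase can be written out directly; a minimal sketch (the helper name `forward_eliminate` is our own, not from the lecture), applied to the 2-variable example above:
```python
import numpy as np

def forward_eliminate(M):
    """Reduce an augmented matrix to row echelon form (illustrative sketch)."""
    M = M.astype(float)
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):                        # last column is the right-hand side
        pivot = next((i for i in range(r, rows) if M[i, c] != 0), None)
        if pivot is None:
            continue                                 # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]                # row exchange if needed
        for i in range(r + 1, rows):
            M[i] -= (M[i, c] / M[r, c]) * M[r]       # eliminate entries below the pivot
        r += 1
    return M

aug = np.array([[1, -2,  1],
                [3,  2, 11]])
print(forward_eliminate(aug))   # [[1. -2. 1.], [0. 8. 8.]] -- row echelon form
```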
### Gauss-Jordan Elimination
$\begin{split}Augmented\ matrix &\xrightarrow{Gaussian\ Elimination} Row\ echelon\ form \\ &\xrightarrow{Jordan\ Elimination} Reduced\ row\ echelon\ form\end{split}$
e.g.
$\begin{split}&\left[\begin{array}{c c c|c}
2 & 4 & -2 & 2 \\
4 & 9 & -3 & 8 \\
-2 & -3 & 7 & 10\\
\end{array}\right] \\
\xrightarrow{Gaussian\ Elimination} &\left[\begin{array}{c c c|c}
\color{red}{2} & 4 & -2 & 2 \\
0 & \color{red}{1} & 1 & 4 \\
0 & 0 & \color{red}{4} & 8\\
\end{array}\right] \\
\xrightarrow{Jordan\ Elimination} &\left[\begin{array}{c c c|c}
\color{red}{1} & 0 & 0 & -1 \\
0 & \color{red}{1} & 0 & 2 \\
0 & 0 & \color{red}{1} & 2 \\
\end{array}\right]\end{split}$
- Steps
1. Pivot column: the left most non-zero column
Pivot position: the top position in the pivot column
2. In the pivot column, choose any non-zero entry (element of the matrix) in a row that is not ignored, and perform the appropriate row exchange to bring this entry into the pivot position
3. Using the row with the pivot, perform an appropriate row reduction on each lower row to make every entry below the pivot zero
4. Ignore the row with the pivot, and repeat steps 1.~4. on the submatrix that remains, as long as it still contains a non-zero row
\- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
5. Scale the pivots to make them 1s, then reduce the entries above each pivot to zeros
6. If step 5 was performed on the first row, stop
Otherwise, repeat step 5 (a code sketch of the full procedure follows the worked example below)
e.g.
$\begin{split}&\left\{\begin{array}{c}
-7x - 3y + 3z = 12 \\
2x + 2y + 2z = 0 \\
-x - 4y + 3z = -9\\
\end{array}\right. \\
\xrightarrow{augmented\ matrix}
&\left[\begin{array}{c c c|c}
-7 & -3 & 3 & 12 \\
2 & 2 & 2 & 0 \\
-1 & -4 & 3 & -9 \\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\xrightarrow{row\ exchange\ ①,\ ②}
&\left[\begin{array}{c c c|c}
\color{red}{2} & 2 & 2 & 0 \\
-7 & -3 & 3 & 12 \\
-1 & -4 & 3 & -9\\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\xrightarrow{①\div2\ \to\ ①}
&\left[\begin{array}{c c c|c}
\color{red}{1} & 1 & 1 & 0 \\
-7 & -3 & 3 & 12 \\
-1 & -4 & 3 & -9\\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\xrightarrow{①\times 7 + ②\ \to\ ②} &\left[\begin{array}{c c c|c}
\color{red}{1} & 1 & 1 & 0 \\
0 & \color{red}{4} & 10 & 12 \\
-1 & -4 & 3 & -9\\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\xrightarrow{① + ③\ \to\ ③}
&\left[\begin{array}{c c c|c}
\color{red}{1} & 1 & 1 & 0 \\
0 & \color{red}{4} & 10 & 12 \\
0 & -3 & 4 & -9\\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\xrightarrow{② \times {3 \over 4} + ③\ \to\ ③}
&\left[\begin{array}{c c c|c}
\color{red}{1} & 1 & 1 & 0 \\
0 & \color{red}{4} & 10 & 12 \\
0 & 0 & \color{red}{23 \over 2} & 0\\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\xrightarrow{② \times {1 \over 4},\ ③ \times {2 \over 23} }
&\left[\begin{array}{c c c|c}
1 & 1 & 1 & 0 \\
0 & 1 & {5 \over 2} & 3 \\
0 & 0 & \color{red}{1} & 0\\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\xrightarrow{③ \times {-5 \over 2} + ②\ \to\ ②}
&\left[\begin{array}{c c c|c}
1 & 1 & 1 & 0 \\
0 & \color{red}{1} & 0 & 3 \\
0 & 0 & \color{red}{1} & 0\\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\xrightarrow{③ \times (-1) + ② \times (-1) + ①\ \to\ ①}
&\left[\begin{array}{c c c|c}
\color{red}{1} & 0 & 0 & -3 \\
0 & \color{red}{1} & 0 & 3 \\
0 & 0 & \color{red}{1} & 0\\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\xrightarrow{} &\ \ \ \ \left.\begin{array}{c}
\ \ \ x = -3 \\
y = 3 \\
z = 0 \\
\end{array}\right.\end{split}$
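A code sketch of the full procedure (the helper name `rref` is our own; it clears above and below each pivot in one pass, a common shortcut rather than the separate Jordan phase described above), checked on this example:
```python
import numpy as np

def rref(M):
    """Gauss-Jordan elimination to reduced row echelon form (illustrative sketch)."""
    M = M.astype(float)
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):                        # last column is the right-hand side
        pivot = next((i for i in range(r, rows) if M[i, c] != 0), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]                # bring a non-zero entry to the pivot position
        M[r] /= M[r, c]                              # scale the pivot to 1
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]               # clear the rest of the pivot column
        r += 1
    return M

aug = np.array([[-7, -3, 3, 12],
                [ 2,  2, 2,  0],
                [-1, -4, 3, -9]])
print(rref(aug))   # last column reads x = -3, y = 3, z = 0
```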
- Operations
3. matrix multiplication
$A = \left[\begin{array}{c c c}
a_1 & a_2 & a_3 \\
a_4 & a_5 & a_6 \\
\end{array}\right]_{2 \times 3},\ B = \left[\begin{array}{c c}
b_1 & b_4\\
b_2 & b_5\\
b_3 & b_6\\
\end{array}\right]_{3 \times 2} \\
AB = \left[\begin{array}{c c c}
a_1 & a_2 & a_3 \\
a_4 & a_5 & a_6 \\
\end{array}\right] \left[\begin{array}{c c}
b_1 & b_4\\
b_2 & b_5\\
b_3 & b_6\\
\end{array}\right] = \left[\begin{array}{c c}
(a_1b_1 + a_2b_2 + a_3b_3) & (a_1b_4 + a_2b_5 + a_3b_6)\\
(a_4b_1 + a_5b_2 + a_6b_3) & (a_4b_4 + a_5b_5 + a_6b_6)\\
\end{array}\right]_{2 \times 2}$
- Properties
- Associative law
$A(BC) = (AB)C$
- Commutative law (does not hold in general)
$AB \not= BA$ in general
- Distributive law
$A(B+C) = AB + AC$
$(A+B)C = AC + BC$
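A quick numerical illustration of these rules, using NumPy's `@` operator for matrix multiplication (the example values are our own):
```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])              # 2 x 3
B = np.array([[1, 4],
              [2, 5],
              [3, 6]])                 # 3 x 2
C = np.array([[1, 0],
              [0, 2],
              [1, 1]])                 # 3 x 2

print(A @ B)                                     # 2 x 2 product, as in the formula above
print(np.allclose(A @ (B + C), A @ B + A @ C))   # distributive law: True
print((A @ B).shape, (B @ A).shape)              # (2, 2) vs (3, 3): AB and BA need not even match
```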
### Elimination using matrices
Given a matrix $M = \left[\begin{array}{c c c}
m_{11} & m_{12} & m_{13}\\
m_{21} & m_{22} & m_{23}\\
m_{31} & m_{32} & m_{33}\\
\end{array}\right]_{3 \times 3}$
a row vector $\vec{r} = \left[\begin{array}{c c c}
r_1 & r_2 & r_3\\
\end{array}\right]_{1 \times 3}$
$\begin{split}\vec{r}M &= \left[\begin{array}{c c c}
r_1 & r_2 & r_3\\
\end{array}\right] \left[\begin{array}{c c c}
m_{11} & m_{12} & m_{13}\\
m_{21} & m_{22} & m_{23}\\
m_{31} & m_{32} & m_{33}\\
\end{array}\right] \\
&= \left[\begin{array}{c c c}
(r_1m_{11} + r_2m_{21} + r_3m_{31}) & (r_1m_{12} + r_2m_{22} + r_3m_{32}) & (r_1m_{13} + r_2m_{23} + r_3m_{33})\\
\end{array}\right]_{1 \times 3} \\
&= r_1\left[\begin{array}{c c c}
m_{11} & m_{12} & m_{13}\\
\end{array}\right] + r_2\left[\begin{array}{c c c}
m_{21} & m_{22} & m_{23}\\
\end{array}\right] + r_3\left[\begin{array}{c c c}
m_{31} & m_{32} & m_{33}\\
\end{array}\right]\end{split}$
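The identity above says that a row vector acting from the left produces a combination of the rows of $M$; a small check with arbitrary values of our own choosing:
```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
r = np.array([2, 0, -1])                   # [r1 r2 r3]

lhs = r @ M                                # row vector times matrix
rhs = 2*M[0] + 0*M[1] + (-1)*M[2]          # the same combination of the rows of M
print(lhs, np.allclose(lhs, rhs))          # [-5 -4 -3] True
```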
- Revisiting Gaussian Elimination
$\begin{split}&\left[\begin{array}{c c c|c}
2 & 4 & -2 & 2 \\
4 & 9 & -3 & 8 \\
-2 & -3 & 7 & 10 \\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\to &\left(\begin{array}{c}
① \to ① \\
① \times (-2) + ② \to ② \\
③ \to ③ \\
\end{array}\right) \left[\begin{array}{c c c}
1 & 0 & 0 \\
-2 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}\right] \left[\begin{array}{c c c|c}
2 & 4 & -2 & 2 \\
4 & 9 & -3 & 8 \\
-2 & -3 & 7 & 10 \\
\end{array}\right] = \left[\begin{array}{c c c|c}
2 & 4 & -2 & 2 \\
0 & 1 & 1 & 4 \\
-2 & -3 & 7 & 10 \\
\end{array}\right] \left.\begin{array}{c}
① \\
② \\
③ \\
\end{array}\right. \\
\to &\left(\begin{array}{c}
① \times {1 \over 2} \to ① \\
② \to ② \\
① + ③ \to ③ \\
\end{array}\right) \left[\begin{array}{c c c}
{1 \over 2} & 0 & 0 \\
0 & 1 & 0 \\
1 & 0 & 1 \\
\end{array}\right] \left[\begin{array}{c c c|c}
2 & 4 & -2 & 2 \\
0 & 1 & 1 & 4 \\
-2 & -3 & 7 & 10 \\
\end{array}\right] = \left[\begin{array}{c c c|c}
1 & 2 & -1 & 1 \\
0 & 1 & 1 & 4 \\
0 & 1 & 5 & 12 \\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\to &\left(\begin{array}{c}
① \to ① \\
② \to ② \\
③ - ② \to ③ \\
\end{array}\right) \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & -1 & 1 \\
\end{array}\right] \left[\begin{array}{c c c|c}
1 & 2 & -1 & 1 \\
0 & 1 & 1 & 4 \\
0 & 1 & 5 & 12 \\
\end{array}\right] = \left[\begin{array}{c c c|c}
1 & 2 & -1 & 1 \\
0 & 1 & 1 & 4 \\
0 & 0 & 4 & 8 \\
\end{array}\right] \left.\begin{array}{c}
... ① \\
... ② \\
... ③ \\
\end{array}\right. \\
\to &\left(\begin{array}{c}
① \to ① \\
② \to ② \\
③ \times {1 \over 4} \to ③ \\
\end{array}\right) \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & {1 \over 4} \\
\end{array}\right] \left[\begin{array}{c c c|c}
1 & 2 & -1 & 1 \\
0 & 1 & 1 & 4 \\
0 & 0 & 4 & 8 \\
\end{array}\right] = \left[\begin{array}{c c c|c}
1 & 2 & -1 & 1 \\
0 & 1 & 1 & 4 \\
0 & 0 & 1 & 2 \\
\end{array}\right]\end{split}$
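The four steps above can be transcribed into elementary matrices and multiplied out; a sketch checking that their product reproduces the final row echelon form:
```python
import numpy as np

Aug = np.array([[ 2,  4, -2,  2],
                [ 4,  9, -3,  8],
                [-2, -3,  7, 10]], dtype=float)                       # augmented matrix

E1 = np.array([[1,   0, 0], [-2, 1, 0], [0,  0, 1   ]], dtype=float)  # (1)*(-2)+(2) -> (2)
E2 = np.array([[0.5, 0, 0], [ 0, 1, 0], [1,  0, 1   ]], dtype=float)  # (1)*1/2 -> (1), (1)+(3) -> (3)
E3 = np.array([[1,   0, 0], [ 0, 1, 0], [0, -1, 1   ]], dtype=float)  # (3)-(2) -> (3)
E4 = np.array([[1,   0, 0], [ 0, 1, 0], [0,  0, 0.25]], dtype=float)  # (3)*1/4 -> (3)

print(E4 @ E3 @ E2 @ E1 @ Aug)   # [[1 2 -1 1], [0 1 1 4], [0 0 1 2]], as above
```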
- row exchange
use a permutation matrix $P_{ij}$ to exchange the $i^{th}$ and $j^{th}$ row
e.g. $P_{23} = \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 0 & \color{red}{1} \\
0 & \color{red}{1} & 0 \\
\end{array}\right]$
- Summary: Gaussian Elimination of any square matrix can be expressed as a sequence of multiplications by elimination and permutation matrices
### Inverse Matrix
- Identity matrix
$I = \left[\begin{array}{c c c c c}
\color{red}{1} & 0 & \cdots & 0 & 0 \\
0 & \color{red}{1} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots &\vdots \\
0 & 0 & \cdots & \color{red}{1} & 0 \\
0 & 0 & \cdots & 0 & \color{red}{1} \\
\end{array}\right]$
- Define: $A$ is invertible if there exists a matrix $B$ , s.t. $AB = BA = I$
($B$ is the inverse matrix of $A$ , $B = A^{-1}$)
- Theorem1: If $A$ is invertible, its inverse matrix is unique ($A^{-1}$ is unique)
- Proof:
Assume $A$ has two inverse matrices $B$ and $C$ , $B \not= C$
then $BA = I$ , $CA = I$ , $AB = I$ , $AC = I$
$\implies\ B = BI = B(AC) = (BA)C = IC = C$
$\therefore\ B = C = A^{-1}$ (Proof by contradiction)
- Theorem2: If $A$ and $B$ are invertible square matrices, then $B^{-1}A^{-1} = (AB)^{-1}$
- Proof:
$(AB)B^{-1}A^{-1} = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I$ , and similarly $(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I$
- Similarly, $(ABCD)^{-1} = D^{-1}C^{-1}B^{-1}A^{-1}$ (reverse order)
- Theorem3: $(A^{-1})^{-1} = A$
- Proof:
Given $AA^{-1} = A^{-1}A = I$
$(A\color{red}{B})^{-1} = \color{red}{B}^{-1}A^{-1} \\
(A\color{red}{A^{-1}})^{-1} = I^{-1} = I \implies (\color{red}{A^{-1}})^{-1}A^{-1} = I \\
\begin{split}&[(A^{-1})^{-1}A^{-1}]A = IA \\
\implies &(A^{-1})^{-1}(A^{-1}A) = A \\
\implies &(A^{-1})^{-1}I = A \\
\implies &(A^{-1})^{-1} = A\end{split}$
- Theorem4: If $A$ is invertible, the one and only solution to $A\vec{x} = \vec{b}$ is $\vec{x} = A^{-1}\vec{b}$
- Proof:
$\begin{split} &A\vec{x} = \vec{b} \\
\implies &A^{-1}A\vec{x} = A^{-1}\vec{b} \\
\implies &I\vec{x} = A^{-1}\vec{b} \\
\implies &\vec{x} = A^{-1}\vec{b}\end{split}$
- Theorem5: Suppose there is a non-zero vector $\vec{x}$ s.t. $A\vec{x} = \vec{O}$ , then $A$ cannot have an inverse, i.e. $A$ is not invertible
- Proof:
Suppose $A$ is invertible,
$\begin{split}&A\vec{x} = \vec{O} \\
\iff &A^{-1}A\vec{x} = A^{-1}\vec{O} \\
\iff &I\vec{x} = \vec{O}\end{split}$
$\therefore \vec{x}$ must be $\vec{O}$ , contradicting the assumption that $\vec{x}$ is non-zero (proof by contradiction)
- Example
- special cases of matrix inverse
$E_{21} = \left[\begin{array}{c c c}
1 & 0 & 0 \\
-2 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}\right],\ E_{21}^{-1} = \left[\begin{array}{c c c}
1 & 0 & 0 \\
2 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}\right] \\
P_{23} = \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
\end{array}\right],\ P_{23}^{-1} = \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
\end{array}\right]$
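These special cases are easy to verify numerically (values copied from the matrices above):
```python
import numpy as np

E21     = np.array([[1, 0, 0], [-2, 1, 0], [0, 0, 1]])
E21_inv = np.array([[1, 0, 0], [ 2, 1, 0], [0, 0, 1]])
P23     = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])

print(np.array_equal(E21 @ E21_inv, np.eye(3, dtype=int)))   # True: adding back 2*row1 undoes E21
print(np.array_equal(P23 @ P23, np.eye(3, dtype=int)))       # True: swapping rows 2,3 twice gives I
```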
### Gauss-Jordan Elimination for Finding the Inverse Matrix
$AA^{-1} = I,\ A^{-1} = \left[\begin{array}{c c c}
\vec{x_1} & \vec{x_2} & \vec{x_3} \\
\end{array}\right] \\
AA^{-1} = A\left[\begin{array}{c c c}
\vec{x_1} & \vec{x_2} & \vec{x_3} \\
\end{array}\right] = \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}\right] = I$
$Let\ A = \left[\begin{array}{c c c}
2 & -1 & 0 \\
-1 & 2 & -1 \\
0 & -1 & 2 \\
\end{array}\right]$
This can be decomposed into 3 systems of linear equations with the same $A$
$i.\ A\vec{x_1} = \left[\begin{array}{c}
1 \\
0 \\
0 \\
\end{array}\right],\ ii.\ A\vec{x_2} = \left[\begin{array}{c}
0 \\
1 \\
0 \\
\end{array}\right],\ iii.\ A\vec{x_3} = \left[\begin{array}{c}
0 \\
0 \\
1 \\
\end{array}\right]$
Solve the three problems at the same time using Gauss-Jordan Elimination
---
Augmented matrix $\left[\begin{array}{c|c}
A & I \\
\end{array}\right]$
$\begin{split}= &\left[\begin{array}{c c c|c c c}
2 & -1 & 0 & 1 & 0 & 0 \\
-1 & 2 & -1 & 0 & 1 & 0 \\
0 & -1 & 2 & 0 & 0 & 1 \\
\end{array}\right] \\
\to &\left[\begin{array}{c c c|c c c}
2 & -1 & 0 & 1 & 0 & 0 \\
0 & {3 \over 2} & -1 & {1 \over 2} & 1 & 0 \\
0 & 0 & {4 \over 3} & {1 \over 3} & {2 \over 3} & 1 \\
\end{array}\right] \\
\to &\left[\begin{array}{c c c|c c c}
1 & 0 & 0 & {3 \over 4} & {1 \over 2} & {1 \over 4}\\
0 & 1 & 0 & {1 \over 2} & 1 & {1 \over 2}\\
0 & 0 & 1 & {1 \over 4} & {1 \over 2} & {3 \over 4}\\
\end{array}\right] \\
\to &\ \vec{x_1} = \left[\begin{array}{c}
{3 \over 4} \\
{1 \over 2} \\
{1 \over 4} \\
\end{array}\right],\ \vec{x_2} = \left[\begin{array}{c}
{1 \over 2} \\
1 \\
{1 \over 2} \\
\end{array}\right],\ \vec{x_3} = \left[\begin{array}{c}
{1 \over 4} \\
{1 \over 2} \\
{3 \over 4} \\
\end{array}\right] \\
\implies &\ A^{-1} = \left[\begin{array}{c c c}
\vec{x_1} & \vec{x_2} & \vec{x_3} \\
\end{array}\right] = \left[\begin{array}{c c c}
{3 \over 4} & {1 \over 2} & {1 \over 4}\\
{1 \over 2} & 1 & {1 \over 2}\\
{1 \over 4} & {1 \over 2} & {3 \over 4}\\
\end{array}\right]\end{split}$
- Remarks
Gauss-Jordan Elimination is guaranteed to find $A^{-1}$ if $A$ is invertible
Since $A^{-1}$ is unique, if Gauss-Jordan Elimination fails → $A$ is not invertible
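The "three systems at once" idea can also be checked with NumPy: solving $A\vec{x_i} = \vec{e_i}$ for all three right-hand sides simultaneously gives $A^{-1}$ (this uses the library solver, not hand elimination):
```python
import numpy as np

A = np.array([[ 2, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  2]], dtype=float)

A_inv = np.linalg.solve(A, np.eye(3))      # solve A x_i = e_i for the three columns of I at once
print(A_inv)                               # [[0.75 0.5 0.25], [0.5 1. 0.5], [0.25 0.5 0.75]]
print(np.allclose(A @ A_inv, np.eye(3)))   # True
```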
### Elimination $\equiv$ Factorization
$LU$ factorization (decomposition) $\equiv$ Gaussian Elimination
$A = LU$ where $L$ is a lower triangular matrix and $U$ is an upper triangular matrix (Assume: $A$ is a square matrix and is properly permuted)
$A_{3 \times 3} = \left[\begin{array}{c c c}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \\
\end{array}\right] = \left[\begin{array}{c c c}
1 & 0 & 0 \\
l_{21} & 1 & 0 \\
l_{31} & l_{32} & 1 \\
\end{array}\right] \left[\begin{array}{c c c}
u_{11} & u_{12} & u_{13} \\
0 & u_{22} & u_{23} \\
0 & 0 & u_{33} \\
\end{array}\right]$
- Example
$A = \left[\begin{array}{c c c}
2 & 1 & 1 \\
4 & -6 & 0 \\
-2 & 7 & 2 \\
\end{array}\right] \\
E_{32}E_{31}E_{21}A = U = \left[\begin{array}{c c c}
2 & 1 & 1 \\
0 & -8 & -2 \\
0 & 0 & 1 \\
\end{array}\right] \\
E_{21}^{-1}E_{31}^{-1}E_{32}^{-1} = L = \left[\begin{array}{c c c}
1 & 0 & 0 \\
2 & 1 & 0 \\
-1 & -1 & 1 \\
\end{array}\right] \\
\begin{split}E_{21}^{-1}E_{31}^{-1}E_{32}^{-1}E_{32}E_{31}E_{21}A &= E_{21}^{-1}E_{31}^{-1}E_{32}^{-1}U \\
\implies A &= E_{21}^{-1}E_{31}^{-1}E_{32}^{-1}U \\
\implies A &= LU\end{split}$
> $L$: Records the row reductions performed during Gaussian Elimination, with its lower triangular elements being the coefficients used
> $U$: Result of Gaussian Elimination with pivots in diagonal (Row echelon form)
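A numerical check of this factorization, plus the library version for comparison (`scipy.linalg.lu` applies partial pivoting, so its factors may differ from the hand computation):
```python
import numpy as np
from scipy.linalg import lu

A = np.array([[ 2,  1, 1],
              [ 4, -6, 0],
              [-2,  7, 2]], dtype=float)

L = np.array([[ 1,  0, 0],
              [ 2,  1, 0],
              [-1, -1, 1]], dtype=float)      # hand-computed L from the example
U = np.array([[ 2,  1,  1],
              [ 0, -8, -2],
              [ 0,  0,  1]], dtype=float)     # hand-computed U from the example
print(np.allclose(L @ U, A))                  # True

P, L2, U2 = lu(A)                             # SciPy returns P, L, U with A = P @ L @ U
print(np.allclose(P @ L2 @ U2, A))            # True
```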
### LDU Factorization (Decomposition)
$\begin{split}A = L\color{red}{U} = L\color{red}{DU'} &= \left[\begin{array}{c c c}
1 & 0 & 0 \\
2 & 1 & 0 \\
-1 & -1 & 1 \\
\end{array}\right] \left[\begin{array}{c c c}
2 & 1 & 1 \\
0 & -8 & -2 \\
0 & 0 & 1 \\
\end{array}\right] \\
&= \left[\begin{array}{c c c}
\color{red}{1} & 0 & 0 \\
2 & \color{red}{1} & 0 \\
-1 & -1 & \color{red}{1} \\
\end{array}\right] \left[\begin{array}{c c c}
2 & 0 & 0 \\
0 & -8 & 0 \\
0 & 0 & 1 \\
\end{array}\right] \left[\begin{array}{c c c}
\color{red}{1} & {1 \over 2} & {1 \over 2} \\
0 & \color{red}{1} & {1 \over 4} \\
0 & 0 & \color{red}{1} \\
\end{array}\right]\end{split}$
- Theorem:
If $A = LDU$ and also $A = L_1D_1U_1$ with all factors invertible, then $L = L_1,\ D = D_1,\ U = U_1$, i.e. if a matrix is factorizable, the $LDU$ factorization is unique
- Proof:
$A = LDU = L_1D_1U_1 \\
\begin{split}L^{-1}\color{red}{A}U_1^{-1} &= L^{-1}\color{red}{L_1D_1U_1}U_1^{-1} = L^{-1}\color{red}{LDU}U_1^{-1} \\
&\implies L^{-1}L_1D_1 = DUU_1^{-1} \\
&\text{(}L^{-1}L_1 \text{ is lower triangular with unit diagonal, } UU_1^{-1} \text{ is upper triangular with unit diagonal, } D \text{ and } D_1 \text{ are diagonal)} \\
&\text{a lower triangular matrix can equal an upper triangular one only if both are diagonal, so} \\
&\ L^{-1}L_1 = I \implies L^{-1} = L_1^{-1} \implies L = L_1\\
&\ UU_1^{-1} = I \implies U_1^{-1} = U^{-1} \implies U = U_1\\
&\ D_1 = D \text{ (comparing the diagonals)}\end{split}$
### Transpose and Permutations
- Define: For a matrix $A$, we denote the transpose of $A$ by $A^{T}$, where the columns of $A^{T}$ are the rows of $A$, i.e. $[A^{T}]_{ij} \triangleq [A]_{ji}$
- Example
$A = \left[\begin{array}{c c c}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9 \\
\end{array}\right] \to A^{T} = \left[\begin{array}{c c c}
1 & 4 & 7 \\
2 & 5 & 8 \\
3 & 6 & 9 \\
\end{array}\right] \\
B = \left[\begin{array}{c c}
1 & 4 \\
2 & 5 \\
3 & 6 \\
\end{array}\right]_{3 \times 2} \to B^{T} = \left[\begin{array}{c c c}
1 & 2 & 3 \\
4 & 5 & 6 \\
\end{array}\right]_{2 \times 3} \\
D = \left[\begin{array}{c c c c}
d_1 & 0 & \cdots & 0 \\
0 & d_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & d_n \\
\end{array}\right]_{n \times n} \to D^{T} = D$
- Properties
1. $(A+B)^{T} = A^T + B^T$
$[(A+B)^T]_{ij} = [A+B]_{ji} = [A]_{ji} + [B]_{ji} = [A^T]_{ij} + [B^T]_{ij}$
2. $(AB)^T = B^TA^T$
For $A_{n \times m}$ and $B_{m \times k}$ : $AB$ is $n \times k$ and $(AB)^{T}$ is $k \times n$
$B^T_{k \times m}$ and $A^T_{m \times n}$ give $[B^TA^T]_{k \times n}$ , so the shapes agree
$AB = \left[\begin{array}{c}
\vec{a_1} \\
\vec{a_2} \\
\vdots \\
\vec{a_n} \\
\end{array}\right] \left[\begin{array}{c c c c}
\vec{b_1} & \vec{b_2} & \cdots & \vec{b_k} \\
\end{array}\right] = \left[\begin{array}{c c c c}
\vec{a_1}\vec{b_1} & \vec{a_1}\vec{b_2} & \cdots & \vec{a_1}\vec{b_k} \\
\vec{a_2}\vec{b_1} & \vec{a_2}\vec{b_2} & \cdots & \vec{a_2}\vec{b_k} \\
\vdots & \vdots & \ddots & \vdots \\
\vec{a_n}\vec{b_1} & \vec{a_n}\vec{b_2} & \cdots & \vec{a_n}\vec{b_k} \\
\end{array}\right]_{n \times k} \\
(AB)^T = \left[\begin{array}{c c c c}
\vec{a_1}\vec{b_1} & \vec{a_2}\vec{b_1} & \cdots & \vec{a_n}\vec{b_1} \\
\vec{a_1}\vec{b_2} & \vec{a_2}\vec{b_2} & \cdots & \vec{a_n}\vec{b_2} \\
\vdots & \vdots & \ddots & \vdots \\
\vec{a_1}\vec{b_k} & \vec{a_2}\vec{b_k} & \cdots & \vec{a_n}\vec{b_k} \\
\end{array}\right]_{k \times n} \\
B^TA^T = \left[\begin{array}{c}
\vec{b_1}^T \\
\vec{b_2}^T \\
\vdots \\
\vec{b_k}^T \\
\end{array}\right] \left[\begin{array}{c c c c}
\vec{a_1}^T & \vec{a_2}^T & \cdots & \vec{a_n}^T \\
\end{array}\right] = \left[\begin{array}{c c c c}
\vec{b_1}^T\vec{a_1}^T & \vec{b_1}^T\vec{a_2}^T & \cdots & \vec{b_1}^T\vec{a_n}^T \\
\vec{b_2}^T\vec{a_1}^T & \vec{b_2}^T\vec{a_2}^T & \cdots & \vec{b_2}^T\vec{a_n}^T \\
\vdots & \vdots & \ddots & \vdots \\
\vec{b_k}^T\vec{a_1}^T& \vec{b_k}^T\vec{a_2}^T & \cdots & \vec{b_k}^T\vec{a_n}^T \\
\end{array}\right]_{k \times n} \\
\vec{a_1}\vec{b_1} = \left[\begin{array}{c c c c}
a_{1,1} & a_{1,2} & \cdots & a_{1,m} \\
\end{array}\right] \left[\begin{array}{c}
b_{1,1} \\
b_{1,2} \\
\vdots \\
b_{1,m} \\
\end{array}\right] = \sum\limits_{i = 1}^m{(a_{1,i})(b_{1,i})} \\
\vec{b_1}^T\vec{a_1}^T = \left[\begin{array}{c c c c}
b_{1,1} & b_{1,2} & \cdots & b_{1,m} \\
\end{array}\right] \left[\begin{array}{c}
a_{1,1} \\
a_{1,2} \\
\vdots \\
a_{1,m} \\
\end{array}\right] = \sum\limits_{i = 1}^m{(b_{1,i})(a_{1,i})} \\
\implies B^TA^T = (AB)^T$
3. $(A^{-1})^T = (A^{T})^{-1}$
$A^{-1}$ is the inverse of $A$ , so $A^{-1}A = I$ , and $I^T = I$ (since $I$ is symmetric)
$(A^{-1}A)^T = I^T = I,\ A^T(A^{-1})^T = I \\
AA^{-1} = I,\ (AA^{-1})^T = I^T = I,\ (A^{-1})^TA^T = I \\
\implies (A^{-1})^T \text{ is the inverse of } A^T \\
\implies (A^T)^{-1} = (A^{-1})^T$
- Define: For a symmetric matrix $A$, $A^T = A$, i.e. $a_{ij} = a_{ji}$
- Theorem:
For any matrix $R_{m \times n}$, $R^TR$ and $RR^T$ are both symmetric
- Proof:
$(R^TR)^T = R^T(R^T)^T = R^TR \\
(RR^T)^T = (R^T)^TR^T = RR^T$
- Example
$\text{Let } R = \left[\begin{array}{c c c}
-1 & 1 & 0 \\
0 & -1 & 1 \\
\end{array}\right],\ R^TR = \left[\begin{array}{c c}
-1 & 0 \\
1 & -1 \\
0 & 1 \\
\end{array}\right] \left[\begin{array}{c c c}
-1 & 1 & 0 \\
0 & -1 & 1 \\
\end{array}\right] = \left[\begin{array}{c c c}
1 & -1 & 0 \\
-1 & 2 & -1 \\
0 & -1 & 1 \\
\end{array}\right]$
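A quick numerical confirmation of the transpose properties and this example ($A$ and $B$ below are sample values of our own):
```python
import numpy as np

A = np.array([[1., 2.],
              [3., 5.]])                       # invertible 2 x 2
B = np.array([[0., 1., 2.],
              [3., 4., 5.]])                   # 2 x 3

print(np.allclose((A @ B).T, B.T @ A.T))                      # (AB)^T = B^T A^T
print(np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T)))    # (A^{-1})^T = (A^T)^{-1}

R = np.array([[-1.,  1., 0.],
              [ 0., -1., 1.]])
print(R.T @ R)                                 # the symmetric matrix computed above
print(np.allclose(R.T @ R, (R.T @ R).T))       # True: R^T R is symmetric
```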
### Apply LDU factorization to a symmetric matrix $A$
$\begin{split}A = \left[\begin{array}{c c}
1 & 2 \\
2 & 7 \\
\end{array}\right] &= \left[\begin{array}{c c}
1 & 0 \\
2 & 1 \\
\end{array}\right] \left[\begin{array}{c c}
1 & 2 \\
0 & 3 \\
\end{array}\right] \\
&= \left[\begin{array}{c c}
1 & 0 \\
2 & 1 \\
\end{array}\right] \left[\begin{array}{c c}
1 & 0 \\
0 & 3 \\
\end{array}\right] \left[\begin{array}{c c}
1 & 2 \\
0 & 1 \\
\end{array}\right] = LDL^T\end{split}$
- Theorem
For a symmetric $A$ with factorization $A = LDU$ , the factors satisfy $U = L^T$
- Proof
Since $A$ is symmetric, $A = A^T$
$A = LDU = (LDU)^T = A^T,\ (LDU)^T = U^TD^TL^T = U^TDL^T \\
\therefore U^TDL^T \text{ is another } LDU \text{ factorization of } A$
since the $LDU$ factorization is unique $\implies L = U^T,\ U = L^T$
$\therefore A = LDU = LDL^T$
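Checking the $2 \times 2$ example numerically (factors copied from the computation above):
```python
import numpy as np

A = np.array([[1., 2.],
              [2., 7.]])
L = np.array([[1., 0.],
              [2., 1.]])
D = np.diag([1., 3.])

print(np.allclose(L @ D @ L.T, A))   # True: A = L D L^T
```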
### Permutation Matrix
- Define: A permutation matrix $P$ has the rows of $I$ in any order
- Example
$P_{21} = \left[\begin{array}{c c c}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1 \\
\end{array}\right],\ P_{31} = \left[\begin{array}{c c c}
0 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
\end{array}\right],\ P_{32} = \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
\end{array}\right] \\
P_{21}P_{31} = \left[\begin{array}{c c c}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0 \\
\end{array}\right],\ P_{31}P_{32} = \left[\begin{array}{c c c}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0 \\
\end{array}\right],\ P_{21}P_{32} = \left[\begin{array}{c c c}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
\end{array}\right] \\
P_{21}P_{31}P_{32} = \left[\begin{array}{c c c}
0 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
\end{array}\right] = P_{31} \\
P_{21}P_{21} = \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}\right] = I$
> There are $6(= 3!)$ ways to permute the rows
> There are $n!$ permutation matrices of size $n \times n$
- Theorem:
$P^{-1} = P^T$
- Example
$P = \left[\begin{array}{c c c}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
\end{array}\right] \left.\begin{array}{c}
...③ \to ① \\
...① \to ② \\
...② \to ③ \\
\end{array}\right.,\ P^{-1} = \left.\begin{array}{c}
① \to ③\ ...\\
② \to ①\ ...\\
③ \to ②\ ...\\
\end{array}\right.\left[\begin{array}{c c c}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0 \\
\end{array}\right]$
> $(PP^T)_{ij}$ is the product of the $i^{th}$ row of $P$ and the $j^{th}$ column of $P^T$ (the $j^{th}$ row of $P$), which is $1$ if $i = j$ and $0$ otherwise
> Thus, $PP^T = I$ ; similarly $P^TP = I \implies P^{-1} = P^T$
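A numerical check of $P^{-1} = P^T$ for the permutation above, plus a demonstration of how $P$ reorders rows (the test matrix values are our own):
```python
import numpy as np

P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

print(np.allclose(P @ P.T, np.eye(3)))        # True, so P^{-1} = P^T
print(np.allclose(np.linalg.inv(P), P.T))     # the inverse really is the transpose

M = np.array([[10, 10],
              [20, 20],
              [30, 30]])
print(P @ M)   # rows reordered: row 3 -> 1, row 1 -> 2, row 2 -> 3
```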
- Example
Compare $L,\ D,$ and $U$ for a symmetric matrix $A$
$\begin{split}A &= \left[\begin{array}{c c c}
a & a & a \\
a & b & b \\
a & b & c \\
\end{array}\right] \\
&\to \left[\begin{array}{c c c}
a & a & a \\
0 & b-a & b-a \\
0 & b-a & c-a \\
\end{array}\right] \\
&\to \left[\begin{array}{c c c}
a & a & a \\
0 & b-a & b-a \\
0 & 0 & c-b \\
\end{array}\right] = U = E_{32}E_{31}E_{21}A \\
L &= (E_{32}E_{31}E_{21})^{-1} \\
&= E_{21}^{-1}E_{31}^{-1}E_{32}^{-1} \\
&= \left[\begin{array}{c c c}
1 & 0 & 0 \\
-1 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}\right]^{-1} \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 1 & 0 \\
-1 & 0 & 1 \\
\end{array}\right]^{-1} \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & -1 & 1 \\
\end{array}\right]^{-1} \\
&= \left[\begin{array}{c c c}
1 & 0 & 0 \\
1 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}\right] \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 1 & 0 \\
1 & 0 & 1 \\
\end{array}\right] \left[\begin{array}{c c c}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 1 & 1 \\
\end{array}\right] \\
&= \left[\begin{array}{c c c}
1 & 0 & 0 \\
1 & 1 & 0 \\
1 & 1 & 1 \\
\end{array}\right] \\
A &= \left[\begin{array}{c c c}
1 & 0 & 0 \\
1 & 1 & 0 \\
1 & 1 & 1 \\
\end{array}\right] \left[\begin{array}{c c c}
a & a & a \\
0 & b-a & b-a \\
0 & 0 & c-b \\
\end{array}\right] = \left[\begin{array}{c c c}
1 & 0 & 0 \\
1 & 1 & 0 \\
1 & 1 & 1 \\
\end{array}\right] \left[\begin{array}{c c c}
a & 0 & 0 \\
0 & b-a & 0 \\
0 & 0 & c-b \\
\end{array}\right] \left[\begin{array}{c c c}
1 & 1 & 1 \\
0 & 1 & 1 \\
0 & 0 & 1 \\
\end{array}\right]\end{split}$
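Finally, a numeric spot-check of this symmetric factorization with arbitrary values $a = 1$, $b = 3$, $c = 6$ (any values with $a \neq 0$, $b \neq a$, $c \neq b$ keep all pivots non-zero):
```python
import numpy as np

a, b, c = 1.0, 3.0, 6.0
A = np.array([[a, a, a],
              [a, b, b],
              [a, b, c]])
L = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.]])
D = np.diag([a, b - a, c - b])

print(np.allclose(L @ D @ L.T, A))   # True: matches the hand factorization above
```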