---
title: Linear Algebra Note 5
tags: Linear Algebra, 線性代數, 魏群樹, 大學, 國立陽明交通大學, 筆記
---
# Linear Algebra Note 5
## Determinants (2021.11.24)
- Properties
9. $\begin{vmatrix}
AB
\end{vmatrix} = \begin{vmatrix}
A
\end{vmatrix}\begin{vmatrix}
B
\end{vmatrix}$
- Recall: $rank(AB) \le min(rank(A),\ rank(B))$
- Proof
Let $A \in \mathbb{R}^{k \times l},\ B \in \mathbb{R}^{l \times m}$
$\forall\ \vec{v} \in C(AB),\ \vec{v} = (AB)\vec{x}$ where $\vec{x}$ is an $m \times 1$ vector
Also, $\vec{v} = A(B\vec{x})$ where $B\vec{x}$ is an $l \times 1$ vector
$\therefore\ \vec{v} \in C(A),\ i.e.\ C(AB) \subseteq C(A) \\
\implies dim(C(AB)) \le dim(C(A)),\ rank(AB) \le rank(A)$
Similarly, $rank(AB) \le rank(B)$
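As a numerical illustration (not part of the proof), the inequality can be checked with a small pure-Python rank function; the matrices `A` and `B` below are hypothetical examples:

```python
# Sketch: rank via Gaussian elimination (count the pivot rows),
# then check rank(AB) <= min(rank(A), rank(B)) on a small example.

def rank(M, tol=1e-10):
    """Rank of a matrix (list of row lists), by row reduction."""
    M = [row[:] for row in M]          # work on a copy
    rows, cols = len(M), len(M[0])
    r = 0                              # number of pivots found so far
    for c in range(cols):
        piv = next((i for i in range(r, rows) if abs(M[i][c]) > tol), None)
        if piv is None:
            continue                   # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1.0, 2.0], [2.0, 4.0]]   # rank 1 (second row = 2 * first row)
B = [[1.0, 3.0], [0.0, 1.0]]   # rank 2
assert rank(matmul(A, B)) <= min(rank(A), rank(B))
```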
1. When $\begin{vmatrix}
B
\end{vmatrix} = 0$ , $B$ is singular, i.e. $rank(B) < n$
$rank(AB) \le min(rank(A),\ rank(B)) \le rank(B) < n$
$\implies AB$ is also singular
$\implies \begin{vmatrix}
AB
\end{vmatrix} = 0 \implies \begin{vmatrix}
AB
\end{vmatrix} = \begin{vmatrix}
A
\end{vmatrix}\begin{vmatrix}
B
\end{vmatrix} = 0$
2. When $\begin{vmatrix}
B
\end{vmatrix} \not= 0$ , Let $D(A) \equiv {\begin{vmatrix}
AB
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}}$
To prove $D(A) = \begin{vmatrix}
A
\end{vmatrix}\ ({\begin{vmatrix}
AB
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} = \begin{vmatrix}
A
\end{vmatrix} \implies \begin{vmatrix}
AB
\end{vmatrix} = \begin{vmatrix}
A
\end{vmatrix}\begin{vmatrix}
B
\end{vmatrix})$
Show that $D(A)$ satisfies the defining rules (i)-(iii)
==(i)== $\begin{vmatrix}
I_n
\end{vmatrix} = 1$
$D(A=I) = {\begin{vmatrix}
IB
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} = {\begin{vmatrix}
B
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} = 1 = \begin{vmatrix}
A
\end{vmatrix}$
==(ii)== Exchange of two rows leads to change of sign
$\text{Let } A = \left[\begin{array}{c}
\vec{a_1}^T \\
\vec{a_2}^T \\
\vdots \\
\vec{a_n}^T \\
\end{array}\right],\ A' = \left[\begin{array}{c}
\vec{a_2}^T \\
\vec{a_1}^T \\
\vdots \\
\vec{a_n}^T \\
\end{array}\right],\ B = \left[\begin{array}{c c c c}
\vec{b_1} & \vec{b_2} & \cdots & \vec{b_m} \\
\end{array}\right] \\
AB = \left[\begin{array}{c}
\vec{a_1}^T \\
\vec{a_2}^T \\
\vdots \\
\vec{a_n}^T \\
\end{array}\right]\left[\begin{array}{c c c c}
\vec{b_1} & \vec{b_2} & \cdots & \vec{b_m} \\
\end{array}\right] = \left[\begin{array}{c c c c}
\vec{a_1}^T\vec{b_1} & \vec{a_1}^T\vec{b_2} & \cdots & \vec{a_1}^T\vec{b_m} \\
\vec{a_2}^T\vec{b_1} & \vec{a_2}^T\vec{b_2} & \cdots & \vec{a_2}^T\vec{b_m} \\
\vdots & \vdots & \ddots & \vdots \\
\vec{a_n}^T\vec{b_1} & \vec{a_n}^T\vec{b_2} & \cdots & \vec{a_n}^T\vec{b_m} \\
\end{array}\right] \\
\begin{split}A'B = \left[\begin{array}{c}
\vec{a_2}^T \\
\vec{a_1}^T \\
\vdots \\
\vec{a_n}^T \\
\end{array}\right]\left[\begin{array}{c c c c}
\vec{b_1} & \vec{b_2} & \cdots & \vec{b_m} \\
\end{array}\right] &= \left[\begin{array}{c c c c}
\vec{a_2}^T\vec{b_1} & \vec{a_2}^T\vec{b_2} & \cdots & \vec{a_2}^T\vec{b_m} \\
\vec{a_1}^T\vec{b_1} & \vec{a_1}^T\vec{b_2} & \cdots & \vec{a_1}^T\vec{b_m} \\
\vdots & \vdots & \ddots & \vdots \\
\vec{a_n}^T\vec{b_1} & \vec{a_n}^T\vec{b_2} & \cdots & \vec{a_n}^T\vec{b_m} \\
\end{array}\right] \\
&= \text{row exchange of } AB\end{split} \\
\therefore \begin{vmatrix}
A'B
\end{vmatrix} = -\begin{vmatrix}
AB
\end{vmatrix},\ D(A') = {\begin{vmatrix}
A'B
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} = {-\begin{vmatrix}
AB
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} = -D(A)$
==(iii)== The determinant is a linear function of each row separately
**(scalar multiplication)**
$A' = \left[\begin{array}{c}
\vec{a_1}^T \\
t\vec{a_2}^T \\
\vdots \\
\vec{a_n}^T \\
\end{array}\right],\ A'B = \left[\begin{array}{c c c c}
\vec{a_1}^T\vec{b_1} & \vec{a_1}^T\vec{b_2} & \cdots & \vec{a_1}^T\vec{b_m} \\
t\vec{a_2}^T\vec{b_1} & t\vec{a_2}^T\vec{b_2} & \cdots & t\vec{a_2}^T\vec{b_m} \\
\vdots & \vdots & \ddots & \vdots \\
\vec{a_n}^T\vec{b_1} & \vec{a_n}^T\vec{b_2} & \cdots & \vec{a_n}^T\vec{b_m} \\
\end{array}\right] \\
\begin{vmatrix}
A'B
\end{vmatrix} = t\begin{vmatrix}
AB
\end{vmatrix} \\
\implies D(A') = {\begin{vmatrix}
A'B
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} = {t\begin{vmatrix}
AB
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} = tD(A)$
**(vector addition)**
$A'' = \left[\begin{array}{c}
\vec{a_1}^T+\vec{a_1}'^T \\
\vec{a_2}^T \\
\vdots \\
\vec{a_n}^T \\
\end{array}\right],\ A''B = \left[\begin{array}{c c c c}
\vec{a_1}^T\vec{b_1}+\vec{a_1}'^T\vec{b_1} & \vec{a_1}^T\vec{b_2}+\vec{a_1}'^T\vec{b_2} & \cdots & \vec{a_1}^T\vec{b_m}+\vec{a_1}'^T\vec{b_m} \\
\vec{a_2}^T\vec{b_1} & \vec{a_2}^T\vec{b_2} & \cdots & \vec{a_2}^T\vec{b_m} \\
\vdots & \vdots & \ddots & \vdots \\
\vec{a_n}^T\vec{b_1} & \vec{a_n}^T\vec{b_2} & \cdots & \vec{a_n}^T\vec{b_m} \\
\end{array}\right] \\
A''' = \left[\begin{array}{c}
\vec{a_1}'^T \\
\vec{a_2}^T \\
\vdots \\
\vec{a_n}^T \\
\end{array}\right],\ A'''B = \left[\begin{array}{c c c c}
\vec{a_1}'^T\vec{b_1} & \vec{a_1}'^T\vec{b_2} & \cdots & \vec{a_1}'^T\vec{b_m} \\
\vec{a_2}^T\vec{b_1} & \vec{a_2}^T\vec{b_2} & \cdots & \vec{a_2}^T\vec{b_m} \\
\vdots & \vdots & \ddots & \vdots \\
\vec{a_n}^T\vec{b_1} & \vec{a_n}^T\vec{b_2} & \cdots & \vec{a_n}^T\vec{b_m} \\
\end{array}\right] \\
\begin{vmatrix}
A''B
\end{vmatrix} = \begin{vmatrix}
AB
\end{vmatrix} + \begin{vmatrix}
A'''B
\end{vmatrix} \\
\implies D(A'') = {\begin{vmatrix}
A''B
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} = {\begin{vmatrix}
AB
\end{vmatrix}+\begin{vmatrix}
A'''B
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} = {\begin{vmatrix}
AB
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} + {\begin{vmatrix}
A'''B
\end{vmatrix} \over \begin{vmatrix}
B
\end{vmatrix}} = D(A) + D(A''')$
$D(A)$ satisfies rules (i)-(iii), which uniquely determine the determinant $\implies D(A) = \begin{vmatrix}
A
\end{vmatrix}$
- $det(A^{-1}) = {1 \over det(A)}$
- Proof
$AA^{-1} = I,\ \begin{vmatrix}
AA^{-1}
\end{vmatrix} = \begin{vmatrix}
A
\end{vmatrix}\begin{vmatrix}
A^{-1}
\end{vmatrix} = \begin{vmatrix}
I
\end{vmatrix} = 1$
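Both property 9 and its corollary are easy to sanity-check numerically in the 2x2 case. A minimal pure-Python sketch (the example matrices are hypothetical, and this is a check, not a proof):

```python
# Numerical sanity check (2x2 case): |AB| = |A||B| and det(A^{-1}) = 1/det(A).

def det2(M):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix (requires det != 0)."""
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d, M[0][0] / d]]

A = [[2.0, 1.0], [0.0, 3.0]]
B = [[1.0, 4.0], [2.0, 5.0]]

assert abs(det2(matmul2(A, B)) - det2(A) * det2(B)) < 1e-12
assert abs(det2(inv2(A)) - 1.0 / det2(A)) < 1e-12
```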
10. $\begin{vmatrix}
A^T
\end{vmatrix} = \begin{vmatrix}
A
\end{vmatrix}$
- Proof
1. when $\begin{vmatrix}
A
\end{vmatrix} = 0,\ A$ is singular $\implies A^T$ is singular $\implies \begin{vmatrix}
A^T
\end{vmatrix} = 0$
2. when $\begin{vmatrix}
A
\end{vmatrix} \not= 0$ , apply $PA=LU$ ($P$ = permutation matrix)
$(PA)^T = (LU)^T \implies A^TP^T = U^TL^T$
from 9, $\begin{vmatrix}
P
\end{vmatrix}\begin{vmatrix}
A
\end{vmatrix} = \begin{vmatrix}
PA
\end{vmatrix} = \begin{vmatrix}
LU
\end{vmatrix} = \begin{vmatrix}
L
\end{vmatrix}\begin{vmatrix}
U
\end{vmatrix},\ \begin{vmatrix}
A^T
\end{vmatrix}\begin{vmatrix}
P^T
\end{vmatrix} = \begin{vmatrix}
U^T
\end{vmatrix}\begin{vmatrix}
L^T
\end{vmatrix}$
$\because PP^T = I \implies \begin{vmatrix}
P
\end{vmatrix}\begin{vmatrix}
P^T
\end{vmatrix} = 1$
Moreover, a permutation matrix $P$ can be regarded as a row-exchanged identity matrix
$\begin{vmatrix}
P
\end{vmatrix} = (-1)^k\begin{vmatrix}
I
\end{vmatrix} = 1\ or\ -1 \\
\because \begin{vmatrix}
P
\end{vmatrix}\begin{vmatrix}
P^T
\end{vmatrix} = 1 \implies \begin{vmatrix}
P^T
\end{vmatrix} = \begin{vmatrix}
P
\end{vmatrix}$
Also, $\begin{vmatrix}
L
\end{vmatrix} = 1 = \begin{vmatrix}
L^T
\end{vmatrix}$ since both are triangular with an all-ones diagonal
$\begin{vmatrix}
U
\end{vmatrix} = \begin{vmatrix}
U^T
\end{vmatrix} =$ product of the pivots
$\begin{vmatrix}
P
\end{vmatrix}\begin{vmatrix}
A
\end{vmatrix} = \begin{vmatrix}
L
\end{vmatrix}\begin{vmatrix}
U
\end{vmatrix} = \begin{vmatrix}
U^T
\end{vmatrix}\begin{vmatrix}
L^T
\end{vmatrix} = \begin{vmatrix}
A^T
\end{vmatrix}\begin{vmatrix}
P^T
\end{vmatrix},\ \begin{vmatrix}
P^T
\end{vmatrix} = \begin{vmatrix}
P
\end{vmatrix} \implies \begin{vmatrix}
A
\end{vmatrix} = \begin{vmatrix}
A^T
\end{vmatrix}$
- Summary: How to find determinant (algorithm)
Use Gaussian elimination $PA=LU,\ \begin{vmatrix}
P
\end{vmatrix}\begin{vmatrix}
A
\end{vmatrix} = \begin{vmatrix}
L
\end{vmatrix}\begin{vmatrix}
U
\end{vmatrix}$
$\begin{vmatrix}
P
\end{vmatrix} = \pm 1,\ \begin{vmatrix}
L
\end{vmatrix} = 1,\ \begin{vmatrix}
U
\end{vmatrix} =$ product of the pivots
$\therefore \begin{vmatrix}
A
\end{vmatrix} = (-1)^k\begin{vmatrix}
U
\end{vmatrix} = (-1)^k \times (product\ of\ the\ pivots)$ , where $k$ is the number of row exchanges
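The algorithm above can be sketched in pure Python. This is a minimal illustration with partial pivoting (production code would call a library routine):

```python
# det(A) = (-1)^k * (product of the pivots), computed by Gaussian
# elimination with partial pivoting; k counts the row exchanges.

def det_by_elimination(A):
    """Determinant of a square matrix given as a list of row lists."""
    n = len(A)
    U = [row[:] for row in A]           # work on a copy
    sign = 1                            # tracks (-1)^k
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot_row = max(range(col, n), key=lambda r: abs(U[r][col]))
        if abs(U[pivot_row][col]) < 1e-12:
            return 0.0                  # zero pivot: the matrix is singular
        if pivot_row != col:
            U[col], U[pivot_row] = U[pivot_row], U[col]
            sign = -sign                # one row exchange flips the sign
        for r in range(col + 1, n):
            m = U[r][col] / U[col][col]
            for c in range(col, n):
                U[r][c] -= m * U[col][c]
    det = float(sign)
    for i in range(n):
        det *= U[i][i]                  # product of the pivots
    return det

# The 2x2 matrix from the eigenvalue example below has determinant 0.5.
assert abs(det_by_elimination([[0.8, 0.3], [0.2, 0.7]]) - 0.5) < 1e-9
```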
## Eigenvectors & Eigenvalues (2021.11.26 ~ 2021.12.03)
- Example
$A = \left[\begin{array}{c c}
0.8 & 0.3 \\
0.2 & 0.7 \\
\end{array}\right],\ \vec{x_1} = \left[\begin{array}{c}
0.6 \\
0.4 \\
\end{array}\right],\ \vec{x_2} = \left[\begin{array}{c}
1 \\
-1 \\
\end{array}\right] \\
A\vec{x_1} = \left[\begin{array}{c c}
0.8 & 0.3 \\
0.2 & 0.7 \\
\end{array}\right]\left[\begin{array}{c}
0.6 \\
0.4 \\
\end{array}\right] = \left[\begin{array}{c}
0.6 \\
0.4 \\
\end{array}\right] = \vec{x_1} \\
A\vec{x_2} = \left[\begin{array}{c c}
0.8 & 0.3 \\
0.2 & 0.7 \\
\end{array}\right]\left[\begin{array}{c}
1 \\
-1 \\
\end{array}\right] = \left[\begin{array}{c}
0.5 \\
-0.5 \\
\end{array}\right] = 0.5\vec{x_2}$
$A\vec{x_1}$ and $A\vec{x_2}$ lie in the same directions as $\vec{x_1}$ and $\vec{x_2}$ , respectively
$\text{Therefore, } \begin{split} &A\vec{x_1} = \lambda_1 \vec{x_1},\ \text{where } \lambda_1 = 1 \\
&A\vec{x_2} = \lambda_2 \vec{x_2},\ \text{where }\lambda_2 = 0.5\end{split}$
$\vec{x_1},\ \vec{x_2}$ are called ==eigenvectors== and $\lambda_1,\ \lambda_2$ are the corresponding ==eigenvalues==
- Define(eigenvector and eigenvalue): Let $A$ be an $n \times n$ matrix. A non-zero vector $\vec{x} \in \mathbb{R}^n$ is called an eigenvector of $A$ if there exists a scalar $\lambda$ such that $A\vec{x} = \lambda\vec{x}$ . This $\lambda$ is called the eigenvalue corresponding to the eigenvector $\vec{x}$
- Theorem
A scalar $\lambda$ is an eigenvalue if and only if $\begin{vmatrix}
A - \lambda I
\end{vmatrix} = 0$
- Proof
If $\vec{x}$ is an eigenvector of $A$ and $\lambda$ is the corresponding eigenvalue
$\begin{split}A\vec{x} = \lambda \vec{x} &\iff A\vec{x} - \lambda \vec{x} = 0 \\
&\iff (A-\lambda I) \vec{x} = 0 \\
&\iff \vec{x} \in N(A-\lambda I)\end{split} \\
\begin{split}\because \vec{x} \not= \vec{0} &\implies N(A-\lambda I) \not= \{ \vec{0} \} \\
&\iff A-\lambda I \text{ is singular } \implies \begin{vmatrix}
A-\lambda I
\end{vmatrix} = 0\end{split}$
- Example
**(Finding $\lambda$)**
$A =\left[\begin{array}{c c}
0.8 & 0.3 \\
0.2 & 0.7 \\
\end{array}\right]$ , Let $\begin{vmatrix}
A-\lambda I
\end{vmatrix} = 0$
$\begin{vmatrix}
\left[\begin{array}{c c}
0.8-\lambda & 0.3 \\
0.2 & 0.7-\lambda \\
\end{array}\right]
\end{vmatrix} = (0.8-\lambda)(0.7-\lambda)-0.06 = 0.56-1.5\lambda+\lambda^2-0.06 = 0 \\
\lambda^2-1.5\lambda+0.5 = (\lambda - 1)(\lambda - 0.5) = 0 \implies \lambda = 1\ or\ 0.5$
**(Finding eigenvectors)**
Solve $(A-\lambda I)\vec{x} = \vec{0}$ , i.e. find $N(A-\lambda I)$ , to obtain the eigenvectors
$\begin{split}\lambda = 1,\ &A - \lambda I = \left[\begin{array}{c c}
-0.2 & 0.3 \\
0.2 & -0.3 \\
\end{array}\right] \to \left[\begin{array}{c c}
1 & {-3 \over 2} \\
0 & 0 \\
\end{array}\right] \\
&\vec{x_1} = N(A - \lambda I) = \{ c\left[\begin{array}{c c}
{3 \over 2} \\
1 \\
\end{array}\right] \mid c \in \mathbb{R} \}\end{split}$
$\begin{split}\lambda = 0.5,\ &A - \lambda I = \left[\begin{array}{c c}
0.3 & 0.3 \\
0.2 & 0.2 \\
\end{array}\right] \to \left[\begin{array}{c c}
1 & 1 \\
0 & 0 \\
\end{array}\right] \\
&\vec{x_2} = N(A - \lambda I) = \{ c\left[\begin{array}{c c}
-1 \\
1 \\
\end{array}\right] \mid c \in \mathbb{R} \}\end{split}$
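The eigenpairs just computed can be verified directly by checking $A\vec{x} = \lambda\vec{x}$ , here in a short pure-Python check:

```python
# Verify A x = lambda x for the eigenpairs found above.

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[0.8, 0.3], [0.2, 0.7]]
pairs = [(1.0, [1.5, 1.0]),    # lambda = 1,   basis vector of N(A - I)
         (0.5, [-1.0, 1.0])]   # lambda = 0.5, basis vector of N(A - 0.5 I)

for lam, x in pairs:
    Ax = matvec(A, x)
    assert all(abs(Ax[i] - lam * x[i]) < 1e-12 for i in range(2))
```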
- Define(trace): Let $A = \left[\begin{array}{c c c c}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} \\
\end{array}\right]$ be an $n \times n$ square matrix, $trace(A)$ or $tr(A) \triangleq \sum\limits_{i = 1}^n{a_{ii}}$
- Theorem
Let $A$ be a square matrix and $\lambda_1,\ \lambda_2,\ \cdots,\ \lambda_n$ be its eigenvalues, then
1. $det(A) = \lambda_1 \times \lambda_2 \times \cdots \times \lambda_n$
2. $tr(A) = \sum\limits_{i = 1}^n{\lambda_i}$
- Proof
$A = \left[\begin{array}{c c c c}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} \\
\end{array}\right],\ A-\lambda I_n = \left[\begin{array}{c c c c}
a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22}-\lambda & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda \\
\end{array}\right] \\
det(A-\lambda I_n) = det(A) + \cdots + \sum\limits_{i=1}^n{a_{ii}(-\lambda)^{n-1}} + (-\lambda)^n$
Let $\lambda_1,\ \lambda_2,\ \cdots,\ \lambda_n$ be the roots of the $n$th-order polynomial $det(A-\lambda I_n)$ , then
$\begin{split}p(\lambda) &= d(\lambda_1-\lambda)(\lambda_2-\lambda)\cdots(\lambda_n-\lambda) \\
&= d[(\lambda_1 \times \lambda_2 \times \cdots \times \lambda_n) + \cdots + \sum\limits_{i=1}^n{\lambda_i(-\lambda)^{n-1}} + (-\lambda)^n]\end{split}$
Match $det(A-\lambda I)$ with $p(\lambda)$ : the $(-\lambda)^n$ terms give $d = 1$ , then comparing the constant terms (set $\lambda = 0$) and the $(-\lambda)^{n-1}$ terms gives
$\implies det(A) = \lambda_1 \times \lambda_2 \times \cdots \times \lambda_n,\ tr(A) = \sum\limits_{i=1}^n{a_{ii}} = \sum\limits_{i=1}^n{\lambda_i}$
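For the running 2x2 example (eigenvalues 1 and 0.5), both identities are easy to confirm numerically:

```python
# Check det(A) = lambda1 * lambda2 and tr(A) = lambda1 + lambda2
# on the running example with eigenvalues 1 and 0.5.

A = [[0.8, 0.3], [0.2, 0.7]]
eigenvalues = [1.0, 0.5]

det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # ad - bc for a 2x2 matrix
tr_A = A[0][0] + A[1][1]

prod = 1.0
for lam in eigenvalues:
    prod *= lam

assert abs(det_A - prod) < 1e-12              # det(A) = 0.5 = 1 * 0.5
assert abs(tr_A - sum(eigenvalues)) < 1e-12   # tr(A) = 1.5 = 1 + 0.5
```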
- Example (Projection matrix)
$P = \left[\begin{array}{c c}
{1 \over 2} & {1 \over 2} \\
{1 \over 2} & {1 \over 2} \\
\end{array}\right],\ \begin{vmatrix}
P - \lambda I
\end{vmatrix} = \begin{vmatrix}
\left[\begin{array}{c c}
{1 \over 2}-\lambda & {1 \over 2} \\
{1 \over 2} & {1 \over 2}-\lambda \\
\end{array}\right]
\end{vmatrix} = \lambda^2-\lambda = 0 \\
\lambda = 1\ or\ 0 \implies \vec{x_1} = \left[\begin{array}{c}
1 \\
1 \\
\end{array}\right],\ \vec{x_2} = \left[\begin{array}{c}
-1 \\
1 \\
\end{array}\right]$
Note that $P$ is a projection matrix onto a subspace spanned by $\left[\begin{array}{c}
1 \\
1 \\
\end{array}\right]$
$\begin{split}\therefore\ &\vec{x_1} \text{ lies in this subspace, so } &P\vec{x_1} = \lambda_1\vec{x_1} = \vec{x_1} \text{ (itself)} \\
&\vec{x_2} \text{ is orthogonal to this subspace, so } &P\vec{x_2} = \lambda_2\vec{x_2} = \vec{0}\end{split}$
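A direct numerical check of both behaviors:

```python
# P fixes x1 (in the subspace, lambda = 1) and annihilates x2
# (orthogonal to it, lambda = 0).

def matvec(P, x):
    return [sum(P[i][j] * x[j] for j in range(2)) for i in range(2)]

P = [[0.5, 0.5], [0.5, 0.5]]
x1 = [1.0, 1.0]    # lies in the subspace spanned by (1, 1)
x2 = [-1.0, 1.0]   # orthogonal to that subspace

assert matvec(P, x1) == x1           # P x1 = 1 * x1
assert matvec(P, x2) == [0.0, 0.0]   # P x2 = 0 * x2
```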
- Example (Rotation matrix)
$Q = \left[\begin{array}{c c}
0 & -1 \\
1 & 0 \\
\end{array}\right]$ rotates a vector $\in \mathbb{R}^2$ by $90^\circ$
check $Q\vec{v} = \left[\begin{array}{c c}
0 & -1 \\
1 & 0 \\
\end{array}\right]\left[\begin{array}{c}
a \\
b \\
\end{array}\right] = \left[\begin{array}{c}
-b \\
a \\
\end{array}\right]$
$(Q\vec{v})^T\vec{v} = \left[\begin{array}{c}
-b \\
a \\
\end{array}\right]^T\left[\begin{array}{c}
a \\
b \\
\end{array}\right] = -ba+ab = 0$ (orthogonal)
**(Find $\lambda$)**
$\begin{vmatrix}
Q-\lambda I
\end{vmatrix} = \begin{vmatrix}
\left[\begin{array}{c c}
-\lambda & -1 \\
1 & -\lambda \\
\end{array}\right]
\end{vmatrix} = \lambda^2+1 = 0,\ \lambda = \pm i$
There doesn't exist a real number $\lambda$ and a real vector $\vec{x}$ s.t. $Q\vec{x} = \lambda \vec{x}$ ,
since **NO** vector stays in the same direction after a rotation by angles other than $n\pi,\ n \in \mathbb{Z}$
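Python's built-in complex type makes it easy to confirm that $\lambda = \pm i$ works. The eigenvector $\vec{x} = (1,\ -i)^T$ used below is a basis vector of $N(Q - iI)$ , derived here rather than taken from the note:

```python
# lambda = +/- i: check Q x = i x with the eigenvector x = (1, -i),
# a basis vector of N(Q - iI).

Q = [[0, -1], [1, 0]]
lam = 1j                   # lambda = i
x = [1 + 0j, -1j]

Qx = [sum(Q[i][j] * x[j] for j in range(2)) for i in range(2)]
assert all(abs(Qx[i] - lam * x[i]) < 1e-12 for i in range(2))

# The characteristic polynomial lambda^2 + 1 vanishes at both roots.
assert lam ** 2 + 1 == 0
assert (-lam) ** 2 + 1 == 0
```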