---
title: Linear Algebra
---
# Linear Algebra — Course Notes
Instructor: Prof. 謝明得, Dept. of Electrical Engineering, NCKU
Textbook: *Linear Algebra and Its Applications*, Sixth Edition, by David C. Lay, Steven R. Lay, and Judi J. McDonald
## For All - Glossary (alphabetical)
- Augmented Matrix 增廣矩陣
- Coefficient 係數
- Coefficient Matrix 係數矩陣
- Canonical form 標準矩陣表示法
- Echelon form 階梯型矩陣
- Elimination algorithm 高斯消去法
- entry 元
- Equivalent 等價
- Linear Equation 線性方程式
- Linear System(A system of linear equation) 線性系統
- Solution 解
- Solution set 所有解
- Triangular form 三角矩陣
## 1.1 Systems of Linear Equations
### Terminology
- Linear Equation 線性方程式
- Coefficient 係數
    - The coefficients of a linear system are always real or complex numbers.
- Linear System (a system of linear equations) 線性系統
- solution 解
- solution set 所有解
- Equivalent 等價
Two linear systems are equivalent when they have exactly the same solution set.
### Solutions: exactly one / none / infinitely many
- Consistent 有解
- Inconsistent 無解
- Exactly one solution
one intersection — the two lines meet at a single point
- No solution
parallel — the lines never meet
- Infinitely many solutions
coincide — the lines are the same line
### Matrix Terminology
- Coefficient Matrix 係數矩陣
- Augmented Matrix 增廣矩陣
- m $\times$ n matrix
    - m : number of rows (horizontal)
    - n : number of columns (vertical)
- Triangular form 三角矩陣
- Elimination algorithm 高斯消去法
- entry 元
- leading entry 領導元
- Canonical form 標準矩陣表示法
- Identity Matrix (*I*) 單位矩陣
### Elementary Row Operations
1. Replacement — add a constant multiple of one row to another row
2. Interchange — swap two rows
3. Scaling — multiply a row by a nonzero constant
- Row Equivalent
Two matrices are row equivalent when one can be turned into the other by a sequence of row operations.
If the augmented matrices of two systems are row equivalent, the systems have the same solution set.
- Using row operations to determine how many solutions a system has:
    - Existence : consistent or not
    - Uniqueness : the solution is unique or not
## 1.2 Row Reduction and Echelon Forms
### Terminology & Theorems
- Echelon form 階梯型矩陣
- All nonzero rows are above any rows of all zeros.
- Each leading entry of a row is in a column to the right of the leading entry of the row above it.
- All entries in a column below a leading entry are zeros.
- Reduced Echelon form 簡化的階梯型矩陣
- The leading entry in each nonzero row is 1.
- Each leading 1 is the only nonzero entry in its column.
- Pivot Position
- Pivot Column
- Pivot 軸元
- A pivot is a nonzero number in a pivot position that is used as needed to create zeros via row operations.
- Thm1 Uniqueness of the Reduced Echelon Form
### Row Reduction Algorithm(RRA)
#### 演算法內容
- Forward phase
1. Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at the top.
2. Select a nonzero entry in the pivot column as a pivot.
3. Use row replacement operations to create zeros in all positions below the pivot.
4. Cover (or ignore) the row containing the pivot position and cover all rows, if any, above it. Apply steps 1–3 to the submatrix that remains. Repeat the process until there are no more nonzero rows to modify.
- Backward phase
1. Beginning with the rightmost pivot and working upward and to the left, create zeros above each pivot. If a pivot is not 1, make it 1 by a scaling operation.
> In step 2, a computer program usually selects as a pivot the entry in a column having the largest absolute value. This strategy, called partial pivoting, is used because it reduces roundoff errors in the calculations.
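The two phases translate directly into code. Below is a minimal NumPy sketch (the helper name `rref` and the example matrix are my own illustration, not from the course); it applies partial pivoting in step 2 exactly as the note above suggests, and clears entries both below and above each pivot so the result is the reduced echelon form.

```python
import numpy as np

def rref(M, tol=1e-12):
    """Row-reduce M to reduced echelon form (forward + backward phase)."""
    A = M.astype(float).copy()
    m, n = A.shape
    pivot_row = 0
    for col in range(n):
        if pivot_row >= m:
            break
        # partial pivoting: choose the entry with the largest absolute value
        r = pivot_row + np.argmax(np.abs(A[pivot_row:, col]))
        if abs(A[r, col]) < tol:
            continue                                # no pivot in this column
        A[[pivot_row, r]] = A[[r, pivot_row]]       # interchange
        A[pivot_row] /= A[pivot_row, col]           # scaling: leading entry -> 1
        for i in range(m):                          # replacement: zero out the column
            if i != pivot_row:
                A[i] -= A[i, col] * A[pivot_row]
        pivot_row += 1
    return A

aug = np.array([[0, 3, -6, 6], [3, -7, 8, -5], [3, -9, 12, -9]])  # augmented matrix [A | b]
print(rref(aug))
```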
#### Terminology
- basic variable 基變項
- The variables corresponding to pivot columns in the matrix are called basic variables. (variables: x,y,z...)
- free variable 自由變項
    - In applications, a free variable may be assigned any value; once the free variables are chosen, the values of the basic variables are determined.
## 1.3 Vector Equations
### Terminology
Vector 向量
Column Vector 行向量
- a matrix with only one column == a vector
Scalar 純量
Zero Vector 零向量 (written "0")
### Notation
$R^2$ : the set of 2-dimensional vectors with real entries
$R^n$ : the set of n-dimensional vectors with real entries
### Vector Addition
- Parallelogram Rule 平行四邊形法
### Linear Combination
- Linear Combination 線性組合
    - $y = c_1v_1 + c_2v_2 + \cdots + c_nv_n$
- weight 權重
The vector equation $x_1a_1 + x_2a_2 + \cdots + x_na_n = b$
has the same solution set as the linear system whose augmented matrix is $[a_1\ a_2\ \cdots\ a_n\ b]$.
### Span
Geometrically, the span of a set of vectors is the set of all their linear combinations — the space "swept out" by those vectors.
- Span{$v_1, v_2, ..., v_p$}
- Subset of $R^n$
b is in the span exactly when $x_1v_1 + x_2v_2 + \cdots + x_nv_n = b$ has a solution.
>To think:
>Let $w_1, w_2, w_3, u$, and $v$ be vectors in $R^n$. Suppose $u$ and $v$ are in Span{$w_1, w_2, w_3$}. Show that $u + v$ is also in Span{$w_1, w_2, w_3$}. [Hint: the solution requires the definition of the span of a set of vectors; it is useful to review this definition before starting the exercise.]
>ANS:
>Since $u$ and $v$ are in Span{$w_1, w_2, w_3$}, there exist scalars $c_1, c_2, c_3$ and $d_1, d_2, d_3$ such that $u = c_1w_1 + c_2w_2 + c_3w_3$ and $v = d_1w_1 + d_2w_2 + d_3w_3$. Then $u + v = (c_1+d_1)w_1 + (c_2+d_2)w_2 + (c_3+d_3)w_3$; since the $c_i + d_i$ are also scalars, $u + v$ is in Span{$w_1, w_2, w_3$}.
## 1.4 The Matrix Equation Ax=b
### Ax
A: $m \times n$ matrix
x: vector of variables in $R^n$
(Ax is defined only when the number of entries of x equals n)
### Matrix Equation (Ax=b)
- Matrix Equation 矩陣方程
- Thm3 : the matrix equation, the vector equation, and the augmented matrix (linear system) all have the same solution set.
### Existence of Solutions
- "$Ax=b$有解"只有在b是A的其中一個linear combination時才成立。
- b is in Span{$a_1,...,a_n$} == $Ax=b$ consistent
- Thm4:
The following four statements are logically equivalent: either all true or all false.
    - For each b in $R^m$, the equation $Ax=b$ has a solution.
    - Each b in $R^m$ is a linear combination of the columns of $A$.
    - The columns of $A$ span $R^m$. (the columns fill the whole space $R^m$)
    - $A$ has a pivot position in every row. (then row reduction of $[A\ b]$ can never produce a row $[0\ \cdots\ 0\ c]$ with $c \neq 0$, so the system is always consistent)
## 1.5 Solution Sets of Linear Systems
Describing solution sets in vector (parametric) form, both abstractly and geometrically.
### Homogeneous Linear Systems
- $Ax=\vec{0}$
    - A: $m\times n$ matrix
    - x: vector of variables in $R^n$
    - $\vec{0}$: zero vector in $R^m$
- trivial solution 明顯解
    - $x=\vec{0}$ is always one solution of $Ax=\vec{0}$
- nontrivial solution 非明顯解
    - a nonzero vector x that satisfies $Ax=0$
    - The homogeneous equation $Ax=0$ has a nontrivial solution if and only if the equation has at least one **free variable**.
After solving $Ax=0$ to get solutions $\vec{v}$, take any particular solution p of $Ax=b$; the solution set of $Ax=b$ is then $p+t\vec{v}$.
- Solution set
    - the solution set of a homogeneous equation $Ax=0$ can always be expressed explicitly as Span{$v_1$, ..., $v_p$}.
    - The same span may be written with different sets of vectors; when checking an answer, verify both that the vectors solve the homogeneous equation and that they span the whole solution set.
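For checking such answers, a computer algebra system can produce one valid spanning set for the solution set of $Ax=0$ directly. A small SymPy sketch (the matrix is illustrative only):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])    # rank 1, so Ax = 0 has two free variables
print(A.nullspace())       # a spanning set: the solution set is the span of these vectors
```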
### Parametric Vector Form
- implicit description
    - ax+by+cz=d
- explicit description
    - parametric vector equation
    - $x=s\vec{u}+t\vec{v}$
- Parametric Vector Form
    - When a solution set is described explicitly with vectors, we say that the solution is in parametric vector form.
    - ex. $x=x_3 \vec{v}$ or $x=t\vec{v}$
### Nonhomogeneous Linear Systems
The solution set of a nonhomogeneous system is the solution set of the homogeneous system translated by $\vec{p}$,
where p is any one solution of Ax=b (a single point of the solution set).
## 1.6 Applications of Linear Systems (skipped)
## 1.7 Linear Independence
### Terminology
- Linearly Independent 線性獨立
    - An indexed set of vectors {$v_1,...,v_p$} in $R^n$ is said to be linearly independent if the vector equation $x_1v_1+x_2v_2+...+x_pv_p=0$ has only the trivial solution.
- Linearly Dependent 線性相依
    - An indexed set of vectors {$v_1,...,v_p$} in $R^n$ is said to be linearly dependent if the vector equation $x_1v_1+x_2v_2+...+x_pv_p=0$ has a nontrivial solution (weights not all zero).
    - Such an equation is called "a linear dependence relation among $v_1,...,v_p$"; we also simply say $v_1,...,v_p$ are linearly dependent.
### Linear Independence of Matrix Columns
The columns of a matrix A are linearly independent if and only if Ax=0 has only the trivial solution.
### Sets of One or Two Vectors
one vector:
- a set containing a single nonzero vector is independent (by definition)
- the set containing only the zero vector is dependent (by definition)
two vectors:
- check whether one vector is a multiple of the other
### Sets of Two or More Vectors
- Thm7
    - An indexed set $S =$ {$v_1,...,v_p$} of two or more vectors is linearly dependent if and only if at least one of the vectors in $S$ is a linear combination of the others.
    - If at least one vector can be produced as a linear combination of the others, the set is linearly dependent.
<img src="https://i.imgur.com/I1OAsXI.png" width=400>
- Thm8
- If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {$v_1,...,v_p$} in $R^n$ is linearly dependent if p > n.
    - If a set S in n-dimensional space contains p vectors with p > n, it must be a linearly dependent set of vectors.
- Thm9
    - If a set S in $R^n$ contains the zero vector, then the set is linearly dependent. (immediate from the definition)
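In practice, independence of a set of column vectors can be tested numerically: stack them as columns and compare the rank with the number of columns (by the definition above, independence means Ax=0 has only the trivial solution, i.e. full column rank). A hedged NumPy sketch with illustrative vectors:

```python
import numpy as np

def independent(vectors):
    """Columns independent <=> rank equals the number of columns."""
    V = np.column_stack(vectors)
    return np.linalg.matrix_rank(V) == V.shape[1]

print(independent([np.array([1., 2., 3.]), np.array([4., 5., 6.])]))               # True
print(independent([np.array([1., 2.]), np.array([2., 4.]), np.array([0., 1.])]))   # False: 3 vectors in R^2 (Thm8)
```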
## 1.8 Linear Transformation
### Transformation
- Another point of view: given $A_{m\times n}$, $x \in R^n$, $b \in R^m$ with $Ax=b$, we view the matrix A as *transforming* the vector x into the vector b (and from n-dimensional space into m-dimensional space).
- From this new point of view, solving the equation Ax = b amounts to finding all vectors x in $R^n$ that are transformed into the vector b in $R^m$ under the “action” of multiplication by A.
- transformation 轉換
<img src="https://i.imgur.com/0RxuwMl.png" width= 500>
For a transformation $T$:
- domain : the input space ($R^n$)
- codomain : the target space ($R^m$)
- image : if x is mapped to T(x), then T(x) is the image of x
- range : the set of all images T(x) — the part of the codomain actually reached
### Matrix Transformation
$x\mapsto Ax$ : T(x)=Ax
- shear transformation 切變轉換
    - maps line segments onto line segments, and maps the vertices of a square onto the vertices of a parallelogram (squares are sheared into parallelograms)
### Linear transformation
- **Every matrix transformation is a linear transformation.**
- A transformation $T$ is linear if
    - $T(u+v)=T(u)+T(v)$ for all $u$, $v$ in the domain of $T$.
    - $T(cu)=cT(u)$ for all scalars $c$ and all $u$ in the domain of $T$.
(A linear transformation preserves vector addition and scalar multiplication.)
- $T(0)=0$ (follows from the definition)
- $T(cu+dv) = cT(u) + dT(v)$
- superposition principle
    - $T(c_1v_1+...+c_pv_p)=c_1T(v_1)+...+c_pT(v_p)$
- contraction 收縮
- dilation 擴張
- rotation transformation (旋轉矩陣)
## 1.9 The Matrix of a Linear Transformation
- How to find the standard matrix A of a transformation T(x)
    - Once we know where $\begin{bmatrix} 1\\0 \end{bmatrix}$ and $\begin{bmatrix} 0\\1 \end{bmatrix}$ are sent by $T$, we can assemble T(x) from them.
    - $T(x)= Ax$, where $A=\begin{bmatrix}T(e_1)& ... & T(e_n)\end{bmatrix}$ and $e_j$ is the $j$-th column of the identity matrix.
- the matrix A is called the standard matrix for the linear transformation.
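A short sketch of this construction: feed the standard basis vectors $e_j$ through T and stack the images as columns. Here T is taken to be a 90° rotation, matching the rotation transformation mentioned in 1.8 (the helper name `standard_matrix` is my own illustration):

```python
import numpy as np

def standard_matrix(T, n):
    """Assemble A = [T(e_1) ... T(e_n)] column by column."""
    return np.column_stack([T(np.eye(n)[:, j]) for j in range(n)])

theta = np.pi / 2
rotate = lambda x: np.array([x[0]*np.cos(theta) - x[1]*np.sin(theta),
                             x[0]*np.sin(theta) + x[1]*np.cos(theta)])

A = standard_matrix(rotate, 2)               # standard matrix of a 90-degree rotation
print(np.round(A @ np.array([1.0, 0.0])))    # e1 is sent to (0, 1)
```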
### Existence and Uniqueness
- Existence
    - At least one. (for every b in the codomain, Ax=b has one solution or many solutions)
    - T is onto $R^m$ when the range of T is the whole codomain $R^m$. (the range fills the entire codomain; the codomain is no bigger than the range)
    - The mapping T is not onto when there is some $b$ in $R^m$ for which the equation $T(x) = b$ has no solution.
- examples of such transformations (figures omitted): reflection 鏡射, contraction and expansion 伸縮, shears, projection 投影
- Uniqueness
    - At most one. (for every b in the codomain, Ax=b has a unique solution or no solution)
    - one-to-one 一對一對應 : if yes, solutions are unique; if not, not unique. (if T is one-to-one, an inverse mapping exists)
- thm11
    Let T : $R^n \rightarrow R^m$ be a linear transformation. Then $T$ is one-to-one if and only if the equation T(x)=0 has only the trivial solution.
    (because T(0)=0)
- thm12
Let $T$ : $R^n \rightarrow R^m$ be a linear transformation, and let $A$ be the standard matrix for $T$.
Then:
- T maps $R^n$ onto $R^m$ if and only if the columns of $A$ span $R^m$.(Existence)
- T is one-to-one if and only if the columns of $A$ are linearly independent.(Uniqueness)
## 2.1 Matrix Operation
- $A=\begin{bmatrix}a_1 & a_2 & \cdots & a_n\end{bmatrix}$, where $a_1, a_2, \dots, a_n$ are the n column vectors
- diagonal 對角線
- main diagonal 主對角線
- diagonal entries : the entries $a_{ii}$
- diagonal matrix : a square matrix whose nondiagonal entries are zero
- zero matrix : all entries zero
### Matrix Addition 加法
- matrices can be added only when they have the same size
- scalar multiplication
- addition is commutative and associative
- additive inverse : the matrix $-A$
- additive identity : the zero matrix
### Matrix Multiplication 乘法
- A(Bx) : first x is multiplied by B, giving Bx; then Bx is multiplied by A, giving ABx
- composition : A(Bx) = (AB)x if $A_{m\times n}$, $B_{n\times p}$, and $x_{p\times 1}$
    - each entry of the product is an inner product (a row of A with a column of B)
    - matrix multiplication corresponds to composition of linear transformations
- $AB=A \begin{bmatrix}b_1 & b_2 & b_3 & \cdots\end{bmatrix}$, where each column is $Ab_n$ — A applied to the weights in $b_n$ — so each column of AB is a linear combination of the columns of A
- multiplication is not commutative : in general $AB \neq BA$
- associative law : (AB)C=A(BC)
- distributive laws : A(B+C)=AB+AC (left), (B+C)A=BA+CA (right)
- multiplicative identity : $I$ ($I_mA_{m \times n}=A_{m \times n}I_n$)
- row-column rule
- "if AB=AC then B=C" is false in general (the cancellation laws do not hold for matrices)
- AB=0 means every row vector of A is orthogonal to every column vector of B
### Power of a Matrix
- $A^k$, where $A$ is an $n \times n$ matrix and $k$ a positive integer
- $A^0$ is the identity matrix
### Transpose of a Matrix 轉置
- $A=[a_{ij}]$ $\rightarrow$ $A^T=[a_{ji}]$
- **fix the main diagonal as a mirror and flip across it**
- properties of the transpose
- $(A^T)^T=A$
- $(A+B)^T=A^T+B^T$
- $(rA)^T=rA^T$
- $(AB)^T=B^TA^T$
## 2.2 The Inverse of a Matrix
$A^{-1}$
- **invertible** : an $n\times n$ matrix $A$ is invertible if there is an $n\times n$ matrix $C$ with $AC=CA=I$
    - an invertible matrix must be a **square matrix**
    - the inverse is **uniquely defined** (B=BI=B(AC)=(BA)C=IC=C)
- singular matrix : a matrix with no inverse
- nonsingular matrix : an invertible matrix
Thm.4
Formula for the inverse of a $2\times 2$ matrix:
$A= \begin{bmatrix} a&b\\c&d \end{bmatrix}\quad$ If $ad-bc \neq 0$, then A is invertible and
$A^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d&-b\\-c&a\end{bmatrix}\quad$ (divide by the determinant; swap the main-diagonal entries and negate the off-diagonal ones)
- determinant 行列式 : the number ad-bc, a property of the matrix, written det A
- invertible $\leftrightarrow$ $\det A \neq 0$ (for a square matrix of any size)
- Thm5.
If $A$ is invertible, then $Ax=b$ has the unique solution $x=A^{-1}b$ (multiply both sides by the inverse)
- Thm6.
    - If $A$ and $B$ are both invertible, then so is $AB$, and $(AB)^{-1}=B^{-1}A^{-1}$
    $(ABC)^{-1}=C^{-1}B^{-1}A^{-1}$
    - If $A$ is invertible, then so is $A^T$, and $(A^T)^{-1}=(A^{-1})^T$
### Elementary Matrices
elementary matrix : a matrix that performs a single row operation
- an elementary matrix is always invertible; the inverse of E is the elementary matrix of the same type that transforms E back into I.
// if you need a matrix with linearly independent columns, the identity matrix is one choice
- Thm7.
$A_{n \times n}$ invertible $\leftrightarrow$ $A$ is row equivalent to $I_n$
- Algorithm for finding $A^{-1}$
Row reduce $\begin{bmatrix}A &I\end{bmatrix}$; if it reduces to $\begin{bmatrix}I & A^{-1}\end{bmatrix}$, read off the inverse; otherwise $A$ is not invertible.
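A compact Gauss–Jordan sketch of this algorithm (illustrative code, not the course's): augment A with I, row reduce, and read off the right half — or report that A is singular.

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Row-reduce [A | I] to [I | A^-1]; raises if A is not invertible."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        r = col + np.argmax(np.abs(M[col:, col]))   # partial pivoting
        if abs(M[r, col]) < 1e-12:
            raise ValueError("A is not invertible")
        M[[col, r]] = M[[r, col]]                   # interchange
        M[col] /= M[col, col]                       # scale pivot row
        for i in range(n):                          # clear the column
            if i != col:
                M[i] -= M[i, col] * M[col]
    return M[:, n:]

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(inverse_gauss_jordan(A))   # [[ 3. -1.] [-5.  2.]]
```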
## 2.3 Properties of Invertible Matrices
>Note
>Viewed as solving simultaneous equations : Gaussian elimination
>Viewed spatially : a mapping from $R^n$ to $R^n$
- Thm.8 The Invertible Matrix Theorem
Let A be an $n\times n$ square matrix. Then the following statements are equivalent (all true or all false):
    - A is invertible
    - $A^T$ is invertible
    - A is row equivalent to the $n\times n$ identity matrix
    - A has n pivot positions
    - Ax=0 has only the trivial solution
    - Columns of A are linearly independent
    - Columns of $A$ span $R^n$
    - $Ax=b$ has at least one solution for each $b$ in $R^n$
    - $x\mapsto Ax$ maps $R^n$ onto $R^n$
    - $x\mapsto Ax$ is one-to-one (as a linear transformation)
### Invertible Linear Transformations
<img src="https://i.imgur.com/294KGLx.png" width=350></img>
if A is invertible : $x\mapsto A^{-1}x$ is itself a linear transformation, and $A^{-1}(Ax)=x$
- Thm.9
If there is a function S such that S(T(x))=x (and T(S(x))=x), we call S the inverse of T, written $T^{-1}$.
## 2.4 Partitioned Matrices (Matrix Decomposition)
>Note — mathematical meaning
inner product : $A_{1\times2}B_{2\times1}$ -> scalar
outer product : $A_{2\times1}B_{1\times2}$ -> $2\times2$ matrix
- partitioned (block) matrix
- blocks/submatrices
- block upper triangular (a partitioned matrix whose blocks form an upper triangle)
- block diagonal matrix
### Addition of Partitioned Matrices
A and B can be added blockwise when they have the same size and are partitioned in the same way.
### Multiplication of Partitioned Matrices
To multiply A and B blockwise: the number of columns of A must equal the number of rows of B, and the column partition of A must match the row partition of B (the cuts are in the same positions).
- Thm.10 Column-Row Expansion of AB
    - AB is a sum of outer products — like a linear combination of the n columns of A, except the weights x are row vectors instead of scalars
<br>
$AB=\begin{bmatrix} col_1(A) & col_2(A) & ... col_n(A)\end{bmatrix} \begin{bmatrix}row_1(B)\\row_2(B)\\.\\.\\.\\row_n(B) \end{bmatrix}$
$=col_1(A)row_1(B)+...+col_n(A)row_n(B)$
### Inverse of Partitioned Matrices
Goal: obtain the inverse of a large matrix from the inverses of its submatrices.
<img src="https://i.imgur.com/uJoPHpk.png" width = 350></img>
Using partitioned-matrix multiplication, write out the four block equations below, then solve them one by one for the blocks of B.
## 2.5 Factorizations
### LU Factorization
Goal: reduce the work when solving a sequence of systems Ax=b that share the same coefficient matrix $A$.
1. Suppose $A$ is an $m\times n$ matrix that can be reduced to echelon form without any interchange steps.
2. Then $A$ can be written as $A=LU$
<img src="https://i.imgur.com/COBhCTB.png" width=350></img>
$L: m\times m$ lower triangular matrix with 1's on the diagonal
$U: m\times n$ upper triangular matrix, an echelon form of A
L is invertible and is called a unit lower triangular matrix.
- $Ax=b$ can be written as $L(Ux)=b$; let $y=Ux$, then
$Ly=b$
$Ux=y$
Solve $Ly=b$ first, then $Ux=y$, to obtain x. This is easier because L and U are both triangular (forward and back substitution).
### LU Factorization Algorithm
Suppose A can be reduced to an echelon form $U$ using only row replacements; then each row replacement step can be expressed as an elementary matrix $E_i$,
so that $E_p \cdots E_1 A = U$
hence $A=(E_p \cdots E_1)^{-1}U=LU$
$L=(E_p \cdots E_1)^{-1}$
>Note
>Products or Inverses of unit lower triangular matrices are also unit lower triangular matrices.
1. Reduce $A$ to an echelon form $U$ by row replacement operations (this may not always be possible)
2. Place entries in $L$ such that the same sequence of row operations reduces $L$ to $I$.
(In practice: each pivot column of $A$ — from the pivot entry down, divided by the pivot — becomes the corresponding column of $L$.)
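A minimal sketch of this procedure, assuming no interchanges are ever needed (Doolittle-style; the function name is my own). The multiplier used in each row replacement is exactly the entry stored in L:

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization without row interchanges (assumes no zero pivot occurs)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # the row-replacement multiplier goes into L
            U[i] -= L[i, k] * U[k]          # row replacement on U
    return L, U

A = np.array([[2.0, 1.0], [6.0, 8.0]])
L, U = lu_no_pivot(A)
print(L @ U)   # reproduces A
# To solve Ax = b: forward-substitute Ly = b, then back-substitute Ux = y.
```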
### EE Application
<img src="https://i.imgur.com/arRzgIq.png" width=350></img>
## 2.7 Applications to Computer Graphics
### Homogeneous 2D Coordinates(2D齊次座標)
- $(x,y)$ in $R^2$ has homogeneous coordinates $(x,y,1)$ in $R^3$
- homogeneous coordinates in $R^3$ can be transformed via multiplication by $3\times 3$ matrices
<img src="https://i.imgur.com/fn1zI22.png" width=350></img>
- Translation 平移
- Scaling (multiply by a scale factor)
- Rotation 旋轉
<img src="https://i.imgur.com/PTzmk6Y.png" width=500></img>
### Homogeneous 3D Coordinates(3D齊次座標)
- $(x,y,z)$ in $R^3$ has homogeneous coordinates $(x,y,z,1)$ in $R^4$
- In general, $(X,Y,Z,H)$ are homogeneous coordinates for $(x,y,z)$, where $x=\frac{X}{H}, y=\frac{Y}{H}, z=\frac{Z}{H}$ if $H \neq 0$.
- homogeneous coordinates in $R^4$ can be transformed via multiplication by $4\times 4$ matrices
### Perspective Projections
- viewing plane 投影面
- perspective projection 透視投影
- center of projection 投影中心

<!-- 03:52-03:58
(2.8跟2.9跳過)
## Question
center of projection 是指一定是投影到xy plane嗎? -->
## 3.1 Introduction to Determinants
- definition of $A_{ij}$ : the submatrix of A obtained by deleting row i and column j
<img src="https://i.imgur.com/VG7iJOV.png" width=150></img>
- definition of $C_{ij}$, the (i,j)-cofactor of A : $C_{ij} = (-1)^{i+j} \det A_{ij}$
### Determinant 行列式
For a square matrix A, fix one row or column and expand along it.
Expanding along the first row:
$\det A=a_{11}\det A_{11}-a_{12}\det A_{12}+...+(-1)^{1+n}a_{1n}\det A_{1n}$
Then recurse until the determinants are $2\times 2$ (or $3 \times 3$) and apply the formula.
- general formula (expansion across row i) : $\det A=\sum^{n}_{j=1}(-1)^{i+j}a_{ij}\det A_{ij}$
- the signs $(-1)^{i+j}$ follow a checkerboard pattern, so you can simply count them off
<img src="https://i.imgur.com/4kNLVCZ.png" width=150></img>
- Thm2. If A is a triangular matrix, then det A is the product of the entries on its main diagonal.
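A direct transcription of the recursive cofactor expansion (fine for small matrices only — the recursion costs O(n!), which is why the echelon-form methods of 3.2 are used in practice):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # A with row 0, col j removed
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(det_cofactor(A), np.linalg.det(A))   # both -2
```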
## 3.2 Properties of Determinants
<img src="https://i.imgur.com/HgC1sc1.png" width=450></img>
<img src="https://i.imgur.com/jwqgRN6.png)" width=450></img>
>5.2 p.307
### Effect of Row Operations on the Determinant
- Thm3. Row Operations
    - replacement : does not change det A (so a replacement-type elementary matrix has determinant 1)
    - interchange : swapping two rows negates the determinant
    - scaling : multiplying **a single row** by k multiplies the determinant by k
>Multiplying a matrix by a constant multiplies every entry, but the determinant scaling rule refers to a single row.
<!-- >(14:20 證明 完全沒在聽XD 投影片p.16) -->
- After reducing A to echelon form,
$\det A=(-1)^r\det U$
$r$ : the number of row interchanges
$U$ : an echelon form of $A$
$\det U$ : the product of the diagonal entries $u_{11}u_{22}\cdots u_{nn}$
If $\det A=0$, then some $u_{ii}$ is 0, which means $A$ has a free variable.
So: $A$ is invertible $\leftrightarrow$ $\det A \neq 0$ $\leftrightarrow$ the rows (and columns) of $A$ are linearly independent ($Ax=0$ has only the trivial solution).
- Thm.4
The Invertible Matrix Theorem gains one more equivalent statement: $\det A \neq 0$
($\det A \neq 0$ $\leftrightarrow$ the columns of A are linearly independent)
- Since the rows of $A$ are the columns of $A^T$, if $\det A=0$ then $A^T$ is not invertible either (slides p.20)
### Column Operations
- Thm.5
$\det A=\det A^T$
Hence column operations affect det A in exactly the same way as row operations.
### Determinants and Matrix Product
- Thm.6 Multiplcative Property
If $A$ and $B$ are $n\times n$ matrices, then $\det(AB) = (\det A)(\det B)$
(proof sketched in class via the contrapositive: p→q is equivalent to ¬q→¬p)
**Warning : $\det(A+B) \neq \det A+\det B$ in general**
## 3.3 Cramer's Rule, Volume, and Linear Transformations
### Cramer's Rule 克拉瑪公式
- definition of $A_i(b)$ : the matrix obtained from A by replacing its i-th column with the vector b
- the rule : the solution x of Ax=b has entries $x_i=\frac{\det A_i(b)}{\det A}$
>High-school version:
><img src="https://i.imgur.com/QwHY0lx.png" width=350></img>
>
>In practice Cramer's rule is used more often in proofs and symbolic derivations —
>e.g. with Laplace transforms / Fourier transforms
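A short sketch of the rule itself (function name mine; `np.linalg.det` computes the determinants). Note the cost — one determinant per unknown — which is why the rule is mainly of theoretical use:

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule (requires det A != 0)."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # A_i(b): replace column i with b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer(A, b), np.linalg.solve(A, b))   # both [0.8, 1.4]
```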
### Formula for $A^{-1}$ (an application of Cramer's Rule)
- adjugate matrix 伴隨矩陣 : the transpose of the matrix of cofactors
$\operatorname{adj}A=\begin{bmatrix}C_{11}&C_{21}&\cdots&C_{n1}\\C_{12}&C_{22}&\cdots&C_{n2}\\\vdots&\vdots&&\vdots\\C_{1n}&C_{2n}&\cdots&C_{nn}\end{bmatrix}$
- $(\operatorname{adj}A)A=(\det A)I$
- $A^{-1} = \frac{1}{\det A}\operatorname{adj}A$
Solve $Ax_j=e_j$ one column at a time; applying Cramer's Rule to each column yields the columns of $A^{-1}$.
### Determinants as Area or Volume
- Thm.9
For a 2x2 matrix : the area of the parallelogram spanned by its columns is |det A|
For a 3x3 matrix : the volume of the parallelepiped spanned by its columns is |det A|
- Moreover, if A is transformed into a diagonal matrix, |det A| stays the same — just like the relationship between a parallelogram and a rectangle:
the straightened figure has the same area/volume as the slanted one, so we may compute the area/volume from the straightened figure instead.
### Linear Transformations
- Thm.10
Let A be the standard matrix of a linear transformation T (T: $R^2 \rightarrow R^2$ or T: $R^3 \rightarrow R^3$),
and let S be a parallelogram in $R^2$ (a parallelepiped in $R^3$).
Then {area (volume) of T(S)} = |det A| $\times$ {area (volume) of S}
## 4.1 Vector Spaces and Subspaces
### Vector Space
A vector space must satisfy:
1. it contains a zero vector (zero element)
2. closure under addition:
the sum of any two elements of $V$ must again lie in $V$
3. closure under scalar multiplication: any scalar multiple of an element must again lie in $V$
(the full definition lists ten axioms; these are the ones to check in practice)
### Subspace
A subsystem of a vector space V.
It must satisfy:
1. it contains the zero vector of $V$
2. closure:
the sum of any two elements of the subspace must stay in the subspace, and any scalar multiple of an element must stay in the subspace
Commutativity and associativity need not be checked — they are inherited from the operations of the ambient space.
- zero subspace : {0} (the set containing only the zero vector)
>A line through the origin is a subspace of the plane ($R^2$).
>(if a line does not pass through the origin, one could translate it to the origin, compute, then translate back)
>A plane through the origin is a subspace of $R^3$.
### Subspace spanned by a set
<img src="https://i.imgur.com/9NAl1LR.png" width=450></img>
- Thm.1
Take any p vectors in a vector space V; then Span{$v_1$,$v_2$,...$v_p$} is a subspace of V.
- If H is a subspace of V, then a **spanning set** for H is a vector set whose span is exactly H.
<img src="https://i.imgur.com/ztzBUXP.png" width=450></img>
## 4.2 Null Space, Column Space, and Linear Transformations
### Null Space (Kernel)
(Plainly) Null Space : the set of all solutions of Ax=0.
The null space of an m$\times$n matrix A:
Nul A = {x | x in $R^n$ and Ax=0}
- Thm.2
The null space of $A_{m \times n}$ is a subspace of $R^n$
- the number of vectors in a spanning set for Nul A equals the number of free variables of Ax=0
### Column Space
Defined via linear combinations.
Column space of $A_{m \times n}$:
- Col A = Span{$a_1$,$a_2$,...$a_n$}, the span of the columns of A — a subspace of $R^m$
- Since Ax expresses a linear combination of the columns of A, we can also write Col A = {b | b=Ax for some $x$ in $R^n$}
- Btw, Col A is also the range of the linear transformation $x\mapsto Ax$
### Row Space (4.6)
- Row A = Span{$r_1$,$r_2$,...,$r_m$}, the span of the rows of A — a subspace of $R^n$
- Thm.13
If A and B are row equivalent, then A and B have the same row space,
so we can use an echelon form B of A to find the row space of A.
>Tip: to see which $R^n$ a column/null/row space lives in, count the entries of its vectors!
### Nul A vs. Col A
Nul A is described implicitly (by the condition Ax=0) and is a subspace of $R^n$; Col A is described explicitly (as the span of the columns) and is a subspace of $R^m$.
### Linear Transformations
A linear transformation T maps each vector x in a vector space V to a vector T(x) in a vector space W,
satisfying
1. T(u+v) = T(u)+T(v)
2. T(cu) = cT(u)
Kernel (null space) of T : the set of all u in V such that T(u)=0
The range of T = {T(x) in W | x in V}
<img src="https://i.imgur.com/D6RsMKY.png" width=300></img>
## 4.3 Linear Independent Sets and Bases
### Basis
The smallest set of independent vectors that generates the space.
(if a vector set B is independent and spans $R^k$, we call B a basis of $R^k$)
Rigorous definition:
Let H be a subspace of a vector space V. An indexed set of vectors B={$b_1$,...,$b_p$} in V is a basis for H if:
1. B is a linearly independent set
2. The subspace spanned by B coincides with H (H = Span{$b_1$,...,$b_p$}) — informally, H is spanned by the vectors of B
#### standard basis
{$e_1$, $e_2$, $e_3$, $e_4$, ... , $e_n$} is the standard basis for $R^n$ — mutually perpendicular, each of length 1
{$1$, $t$, $t^2$, $t^3$, ... , $t^n$} is the standard basis for $P_n$
### Spanning set Theorem
thm:
If some vector a in a vector set is a linear combination of the other vectors, we may delete a from the set without changing the space it spans.
### Bases of Nul A, Col A, Row A
#### Basis of Nul A
Recorded under Null Space in 4.2:
- the number of vectors in a spanning set for Nul A equals the number of free variables of Ax=0
#### Basis of Col A
Thm.6
The pivot columns of A form a basis of Col A.
When a matrix A is row reduced to B, the equations Ax=0 and Bx=0 still have the same solution set, so the (in)dependence relations among the columns of A are the same as those among the columns of B.
!!But the space spanned by the columns of A and the space spanned by the columns of B are generally different!!
>EX.9
>columns 1, 3, 5 of B are independent, so columns 1, 3, 5 of A are independent
#### Basis of Row A
If A and B are row equivalent, their row spaces are the same (row operations just take linear combinations of the rows),
so we can use an echelon form to find a basis of the row space.
>Note : row operations do not preserve the linear dependence relations among the rows of a matrix (so use the rows of the echelon form itself).
### Conclusions
- A basis is a spanning set that is as small as possible (we can delete vectors from it until the set becomes linearly independent)
- A basis is a linearly independent set that is as large as possible
- There are many valid choices of basis!
<img src="https://i.imgur.com/mNGm321.png" width=650></img>
## 4.4 Coordinate System
### Unique Representation Theorem
Defines what a coordinate system is: use a basis to label every point of the space $R^n$ (when the basis contains n vectors).
We can represent the same point with different bases — each basis gives a new view of the same point.
- The Unique Representation Theorem: once a basis is fixed, the coordinates of any given point are unique.
- **Definition** : B-coordinates of x
the coordinate vector that represents a point relative to the basis B
<img src="https://i.imgur.com/H68jejQ.png" width=470></img>
In $x\mapsto [x]_B$, the left-hand "x" is the point's original representation (standard xyz coordinates, unit-length axes), while $[x]_B$ is the B-coordinate vector of x.
### Graphical Interpretation of Coordinates
- the B-coordinate mapping can be thought of as a one-to-one mapping
<img src="https://i.imgur.com/zdYqjws.png" width=450></img>
### Coordinates in $R^n$
- change-of-coordinates matrix
If the basis B = {$b_1$, $b_2$, ... , $b_n$}, then $P_B=[b_1\ ...\ b_n]$
That is, the vector equation $x = c_1b_1 + c_2b_2 + ... + c_nb_n$
is equivalent to $x=P_B[x]_B$
- This is a change of coordinates: $P_B$ is the change-of-coordinates matrix from B to the standard basis.
- $P_B$ is a square matrix (it maps $R^n$ to $R^n$) and is invertible (its columns are independent).
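A quick numeric sketch: finding $[x]_B$ is just solving $P_B[x]_B = x$ (the basis vectors here are illustrative).

```python
import numpy as np

b1, b2 = np.array([2.0, 1.0]), np.array([-1.0, 1.0])
P_B = np.column_stack([b1, b2])     # change-of-coordinates matrix (B -> standard)
x = np.array([4.0, 5.0])
x_B = np.linalg.solve(P_B, x)       # [x]_B: solve P_B [x]_B = x
print(x_B, P_B @ x_B)               # [3. 2.]; the round trip recovers x
```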
### Coordinate Mapping
- Thm.9 If $B$ is a basis of n vectors, then $x \mapsto [x]_B$ is a one-to-one linear transformation from $R^n$ onto $R^n$.
- isomorphism 同構
a one-to-one linear transformation from a vector space $V$ onto a vector space $W$ is called an **isomorphism** from V onto W
- every calculation in V is accurately reproduced in W, and vice versa (the two spaces carry corresponding, equivalent operations)
EX.6
<img src="https://i.imgur.com/51kEIxQ.png" width=450></img>
## 4.5 The Dimension of a Vector Space
### Dimension
- The dimension of a space is the number of vectors in a basis for it.
- Thm.10 If a vector space V has a basis B of n vectors, then any set in V containing p vectors with p > n must be linearly dependent.
- Thm.11 If a vector space has a basis of n vectors, then every basis of it consists of exactly n vectors.
- {0} : dim = 0
### Subspaces of a Finite-Dimensional Space
- Thm.12 If H is a subspace of V, then dim H $\leq$ dim V.
- Thm.13 The Basis Theorem
If V is a p-dimensional vector space (p $\geq$ 1), then any set of exactly p linearly independent vectors in V is automatically a basis of V.
### The Dimensions of Nul A, Col A, and Row A
>Since the pivot columns of a matrix A form a basis for Col A, we know that dim(Col A) = the number of pivot columns
- dim Nul A : the number of free variables
- dim Col A : the number of pivot columns
- dim Row A : equals dim Col A
>dim Nul A is also called the nullity of A
### Rank
- the number of linearly independent columns of A = rank A = the number of linearly independent columns of $A^T$
- rank A = dim Col A = dim Row A
- Thm.14 The Rank Theorem
dim Col A + dim Nul A = n, for an m$\times$n matrix A
So once we compute one of these dimensions, we know the others as well.
- rank $\leq$ m (a column vector has only m entries, so the column space cannot have more than m dimensions)
### The Invertible Matrix Theorem
<img src="https://i.imgur.com/Ksy6Ing.png" width=480></img>
<img src="https://i.imgur.com/uyFxkUr.png" width=480></img>
## 4.7 Change of Basis
### Change of Basis
- thm.15
<img src="https://i.imgur.com/R2btLWK.png" width=470></img>
$[x]_C = P[x]_B$
where $P = [[b_1]_C\ [b_2]_C\ [b_3]_C\ ...\ [b_n]_C]$
P is called the change-of-coordinates matrix from B to C
- the columns of P are independent
- P is invertible
- to get the matrix from C to B : row reduce $[b_1\ b_2 \mid c_1\ c_2]$ until $b_1\ b_2$ becomes the identity matrix
>ex.
><img src="https://i.imgur.com/B6EU7iL.png" width=270></img>
## 4.8 Applications to Markov Chains
>EE applications :
>Finite state machine 有限狀態機
>negative feedback 負回饋
### Markov Chain 馬可夫鏈
Terminology
- probability vector 機率向量 : a vector of nonnegative entries that sum to 1
  = state vector, the vector describing the current state
- stochastic matrix 轉移矩陣 :
    - a square matrix whose columns are probability vectors
    - a stochastic matrix is **regular** if some power of it contains only strictly positive entries
- steady-state vector 穩定狀態 / equilibrium vector
    - for a regular stochastic matrix, every initial state leads to the same steady state
    - Px=x
Content
- Thm.18
If P is a regular stochastic matrix, then P has a unique steady-state vector q, and for any initial state the sequence $x_{k+1}=Px_k$ converges to q.
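A tiny simulation sketch of Thm.18 (the stochastic matrix is illustrative): iterating $x_{k+1} = Px_k$ from any probability vector approaches the steady state.

```python
import numpy as np

P = np.array([[0.9, 0.5],
              [0.1, 0.5]])        # columns are probability vectors (stochastic, regular)
x = np.array([1.0, 0.0])          # any initial state vector
for _ in range(50):
    x = P @ x                     # x_{k+1} = P x_k
print(x)                          # approaches q with Pq = q (about [0.833, 0.167])
```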
## 5.1 Eigenvectors and Eigenvalues
$Ax=\lambda x$ ($A$: $n\times n$, $x$: $n\times 1$, $x \neq 0$)
Multiplication by A does just one thing to x — stretches or shrinks it; $\lambda$ is the eigenvalue and $x$ is the eigenvector.
- eigenvector 特徵向量
    - an eigenvector of A is a nonzero vector satisfying the equation $Ax= \lambda x$
    - it is called an eigenvector corresponding to $\lambda$
- eigenvalue 特徵值
    - the scalar $\lambda$ is called the eigenvalue
- eigenspace
Given $A$ and $\lambda$, the eigenspace of A corresponding to $\lambda$ is the null space of $(A-\lambda I)$, i.e. the solution set of $(A-\lambda I)x=0$.
Every nonzero vector in the eigenspace is an eigenvector of $A$.
To find eigenvectors and eigenvalues, rewrite the equation as $(A-\lambda I)x=0$; letting $A_2 = A-\lambda I$, solve the homogeneous system.
$\lambda$ is an eigenvalue exactly when $A_2x=0$ has a nontrivial solution, i.e. $\det(A-\lambda I)=0$.
>an eigenvector cannot be the zero vector (the zero vector is the trivial solution, which is excluded), but an eigenvalue can be 0
>once we know a vector is an eigenvector and know its eigenvalue, a matrix multiplication can be replaced by a scalar multiplication, which simplifies computation
What if we meet a vector that is not an eigenvector? If it is a linear combination of eigenvectors, we can expand it and apply A term by term.
- thm1.
    - If A is a triangular matrix, then its eigenvalues are the entries on its main diagonal.
    Proof : $(A-\lambda I)x=0$ must have a nontrivial solution, so there must be a free variable, i.e. at least one zero on the main diagonal of $A-\lambda I$.
<img src="https://i.imgur.com/xQpMdFg.png" width=300></img>
- 0 is an eigenvalue of A ($Ax=0x$) if and only if A is not invertible
(0 being an eigenvalue means Ax=0 has a nontrivial solution $\leftrightarrow$ A has no inverse, det A = 0)
<img src="https://i.imgur.com/6MncVfL.png" width=450></img>
- thm.2
If a matrix A has eigenvectors $v_1$,$v_2$...$v_r$ corresponding to **distinct** eigenvalues $\lambda_1$,$\lambda_2$...$\lambda_r$, then $v_1$~$v_r$ must be linearly independent.
(The converse fails: a single eigenvalue may have several independent eigenvectors.)
- $x_{k+1} = Ax_k$ (k=0,1,2...) is a recurrence.
If $x_0$ is an eigenvector with eigenvalue $\lambda$, then $x_k = \lambda^kx_0$ — a closed form!
> $x_0$ (the initial vector) is not necessarily an eigenvector
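A quick numeric check of the definitions (the matrix is illustrative): `np.linalg.eig` returns eigenvalues and eigenvectors, and each pair satisfies $Av = \lambda v$.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
vals, vecs = np.linalg.eig(A)            # columns of vecs are eigenvectors
for lam, v in zip(vals, vecs.T):
    print(lam, np.allclose(A @ v, lam * v))   # A v = lambda v holds for each pair
```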
## 5.2 The Characteristic Equation
Finding the eigenvalues of A amounts to finding the values of $\lambda$ that make $\det(A-\lambda I)=0$.
### The Characteristic Equation
- characteristic equation of A : $\det(A-\lambda I)=0$
- characteristic polynomial : the same expression viewed as **a polynomial in the variable $\lambda$**
- multiplicity : the multiplicity of an eigenvalue is the number of times it occurs as a root of the characteristic polynomial
>ex. $\det(A-\lambda I)=(5-\lambda)(3-\lambda)(5-\lambda)(1-\lambda)$ — the eigenvalue 5 has multiplicity 2
### Similarity
- $P^{-1}AP = B$ (sandwiching A between $P^{-1}$ and P gives B): then A is similar to B (and B is necessarily similar to A)
    - equivalently, $AP=PB$ with $P$ invertible
- similarity transformation : changing A into $P^{-1}AP$
- thm.4
If A and B are similar, then they have the same
    - characteristic polynomial ($\det(A-\lambda I)$)
    - eigenvalues (with the same multiplicities)
> Note1 : two matrices with the same eigenvalues are not necessarily similar
> Note2 : Similarity is not the same as row equivalence — row operations usually change the eigenvalues.
## 5.3 Diagonalization
### Diagonalization
- diagonalizable : $A$ is similar to a diagonal matrix
- if $D$ is a diagonal matrix and A is diagonalizable, i.e. $A=PDP^{-1}$, then $A^k = PD^kP^{-1}$
- thm5. The Diagonalization Theorem
    - an n$\times$n matrix A has n independent eigenvectors $\leftrightarrow$ A is diagonalizable
    - the columns of P are n linearly independent eigenvectors of $A$ (hence P is invertible)
    - and the diagonal entries of $D$ are the eigenvalues corresponding to the respective eigenvector columns of P
    (i.e. $d_{ii}$ is the eigenvalue of the eigenvector in column i of P)
- eigenvector basis of $R^n$
as the name suggests, a basis formed by n eigenvectors
> Think : so there are many different valid pairs of D and P
- thm.6
An n$\times$n matrix with n distinct eigenvalues is diagonalizable.
- thm.7
<img src="https://i.imgur.com/fryR1U3.png" width=480></img>
- A is diagonalizable $\leftrightarrow$ the eigenspaces together supply an n-dimensional eigenvector basis $\leftrightarrow$ the dimension of the eigenspace of each $\lambda_k$ equals the multiplicity of $\lambda_k$
<img src="https://i.imgur.com/JVK6aA9.png" width=450></img>
<img src="https://i.imgur.com/iUkL9xg.png" width=450></img>
>Note : **dimensionality reduction**
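A sketch of the diagonalization workflow with NumPy (illustrative matrix): build P from eigenvectors, D from eigenvalues, and use $A^k = PD^kP^{-1}$.

```python
import numpy as np

A = np.array([[7.0, 2.0],
              [-4.0, 1.0]])
vals, P = np.linalg.eig(A)          # A = P D P^{-1} when the eigenvectors are independent
D = np.diag(vals)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))              # True
print(np.allclose(np.linalg.matrix_power(A, 5),
                  P @ np.diag(vals**5) @ np.linalg.inv(P)))  # A^5 = P D^5 P^{-1}
```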
## 5.4 Eigenvectors and Linear Transformations
$T : V \rightarrow V$
if $T(x)=\lambda x$, then $x$ is an eigenvector and $\lambda$ is an eigenvalue of T
### Matrix of Linear Transformation(LT)
### LT from V into V
(see handwritten notes)
- thm.8 Diagonal Matrix Representation
## 6.1 Inner product, Length, and Orthogonality
### Inner Product(Dot Product)
Purpose: define the length of a vector and the angle between two vectors.
> equivalent space (moving the origin)
inner product of two vectors : $u\cdot v$ = $u^Tv$ = $u_1v_1+u_2v_2+...+u_nv_n$
- thm1 properties of the inner product
    - $u \cdot v$ = $v \cdot u$
    - $(u+v)\cdot w = u\cdot w + v\cdot w$
    - $u\cdot u \geq 0$ (equal to 0 only when u is the zero vector)
### Length of a Vector
length / norm
(the Euclidean norm, i.e. the 2-norm, also called the $L^2$ norm)
- $\lVert v \rVert$ = $\sqrt{v_1^2 + v_2^2 + ... + v_n^2}$
$\lVert v \rVert^2$ = $v\cdot v$
#### norm
- the p-norm ($p$ real, $p \geq 1$) : $\lVert v \rVert_p = (|v_1|^p + |v_2|^p + ... + |v_n|^p)^{1/p}$
- the $\infty$-norm = the maximum of $|v_1|$~$|v_n|$
the $(-\infty)$-"norm" = the minimum of $|v_1|$~$|v_n|$
#### unit vector
unit vector : a vector of length 1
the unit vector in the direction of v : $\frac{v}{\lVert v\rVert}$
### Distance in $R^n$
Defines the distance between two vectors.
distance between two numbers on the number line : $|a-b|$
distance between two points in space : the length of their difference
distance between two vectors in space : $\lVert u-v\rVert$
u-v : a diagonal of the parallelogram formed by u and v
### Orthogonal Vectors
perpendicular / orthogonal 正交 : inner product equal to 0
- the zero vector is orthogonal to every vector
- thm.2 Pythagorean theorem : u $\perp$ v $\leftrightarrow$ $\lVert u\rVert^2 + \lVert v\rVert^2 = \lVert u+v \rVert^2$
- a vector orthogonal to a space : perpendicular to every vector in that space
- a space orthogonal to a space : every vector of A is perpendicular to every vector of B
### Orthogonal Complements
$W$ perp ($W^\perp$) : the set of all vectors $z$ that are orthogonal to $W$.
> 1. every vector in $W^\perp$ is perpendicular to all vectors of W, and no vector outside $W^\perp$ has this property
> 2. $W^\perp$ is a subspace of $R^n$
- thm.3
<img src="https://i.imgur.com/uCUtICR.png" width=300></img>
the row space of A is orthogonal to the null space of A
- $(\mathrm{Row}A)^\perp$ = Nul A
- $(\mathrm{Col}A)^\perp$ = Nul $A^T$
### Angles in $R^2$ and $R^3$ (Optional)
- $u\cdot v = \lVert u \rVert \lVert v \rVert \cos{\theta}$
## 6.2 Orthogonal Sets
### Orthogonal Set
a set of vectors in which every pair of vectors is orthogonal (if the vectors are nonzero, such a set is automatically linearly independent)
- orthogonal basis
an orthogonal basis of $W$ is an orthogonal set that spans $W$ (and is therefore a basis of W)
- thm.5
<img src="https://i.imgur.com/ESjyJt9.png" width=480></img>
### orthogonal projection
$\hat{y}$ : the projection of $y$ onto the vector $u$
<img src="https://i.imgur.com/nRr7gda.png" width=200></img>
- this generalizes to projection onto a line L
- $\hat{y}$ = proj$_L$ $y = \frac{y\cdot u}{u \cdot u} u$
<img src="https://i.imgur.com/nVMFjmI.png" width=320></img>
- when the basis has more than one vector, $\hat{y}=\hat{y_1}+\hat{y_2}$ (project onto each basis vector and add)
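A two-line sketch of the projection formula, with a check that the residual $y-\hat{y}$ is orthogonal to u (the numbers are illustrative):

```python
import numpy as np

def proj(y, u):
    """Orthogonal projection of y onto the line spanned by u."""
    return (y @ u) / (u @ u) * u

y = np.array([7.0, 6.0])
u = np.array([4.0, 2.0])
y_hat = proj(y, u)
print(y_hat, (y - y_hat) @ u)   # [8. 4.]; the residual is orthogonal to u (dot product 0)
```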
### orthonormal set
an orthogonal set of unit vectors (mutually perpendicular, each of length 1)
- thm.6
$U^TU=I$ if and only if U is a matrix with orthonormal columns (mutually perpendicular, length 1)
- thm.7
Let U be a matrix with orthonormal columns.
Multiplying a vector by U leaves its length unchanged:
$\lVert Ux \rVert = \lVert x \rVert$
$(Ux) \cdot (Uy) = x \cdot y$
$(Ux) \cdot (Uy) = 0$ if and only if $x \cdot y =0$
Summary : the mapping $x\mapsto Ux$ preserves lengths and orthogonality
### orthogonal matrix(*it's better to be called orthonormal matrix*)
a square matrix with orthonormal columns,
so that $U^{-1} = U^T$ ($U^TU = I$)
### orthonormal basis (textbook 6.4)
An orthonormal basis is constructed easily from an orthogonal basis {$v_1$~$v_p$} : simply normalize (i.e., “scale”) all the $v_k$.
## 6.3 Orthogonal Projection (垂直投影)
### orthogonal projection
<img src="https://i.imgur.com/yvEXWEc.png" width=400></img>
- thm.8 the Orthogonal Decomposition Theorem
    - decompose a vector into two pieces: the component in the subspace W of interest (computed via thm.5) and the component orthogonal to W — just like splitting a vector v into its x- and y-components
    - the decomposition is unique
    - the uniqueness of the decomposition shows that the orthogonal projection $\hat{y}$ depends only on $W$, not on the basis {$u_1$...$u_p$}
- thm.9 the Best Approximation Theorem
    The projection $\hat{y}$ is the point of W closest to y.
- $\hat{y}$ is called the best approximation to $y$ by elements of $W$.
<img src=https://i.imgur.com/knZUkXi.png width=230></img>
- thm.10
If {$u_1$~$u_p$} is an **orthonormal** basis for $W$ (W a subspace of $R^n$),
then the projection of y onto W equals $(y \cdot u_1)u_1$+...+$(y \cdot u_p)u_p$.
In matrix form, proj$_Wy=UU^Ty$.
Summary :
- $U^TUx = I_px = x$ for all $x$ in $R^p$ ($U^TU$ is the p$\times$p identity)
- $UU^Ty =$ proj$_Wy$ for all $y$ in $R^n$ (a vector — the projection point)
## 6.4 the Gram-Schmidt Process
An algorithm for producing an orthogonal or orthonormal basis for any (subspace of) $R^n$.
- thm11. the Gram-Schmidt Process
$v_1=x_1$
$v_2=x_2-$(the orthogonal projection of $x_2$ onto $v_1$), and so on
<img src="https://i.imgur.com/2qw6mVp.png" width=430></img>
- QR factorization
A: {$x_1$ $x_2$ ... $x_p$} ($x_1$~$x_p$ independent)
Q: {$u_1$ $u_2$ ... $u_p$} (the orthonormal Gram-Schmidt vectors)
R: the coefficients
$x_k$=$r_{1k} u_1 + ... + r_{kk} u_k$
R is an upper triangular matrix and is invertible.
(If the unnormalized orthogonal vectors $v_1$~$v_p$ are used instead of the $u_k$, the diagonal entries of R are all 1.)
## 6.5 Least-Squares Problem(LSP)
(Finding the approximate solution with the smallest error.)
Make the error $\lVert b-Ax \rVert$ as small as possible.
- least-squares error : $\lVert b-A\hat{x} \rVert$
$\hat{b} = A\hat{x}$ : the point of Col A closest to b
- project b onto the column space of A to get $\hat{b}$; then there exists $\hat{x}$ such that $A\hat{x}=\hat{b}$
- moreover, $b-\hat{b}$ is perpendicular to every vector in Col A (each inner product is 0)
- normal equations for Ax=b : $A^TAx=A^Tb$
    - from $A^T(b-A\hat{x})=0$, since $b-A\hat{x}$ is orthogonal to Col A
    - (if Ax=b happens to be consistent, its solution also solves the normal equations)
- thm.13
the set of least-squares solutions of Ax=b coincides with the nonempty set of solutions of the normal equations $A^TAx = A^Tb$.
- thm.14
<img src="https://i.imgur.com/S8BidSd.png" width=480></img>
- thm.15
Given $\hat{b} = QQ^Tb$ (by thm.10) and $A\hat{x}=\hat{b}$:
let $A = QR$ (QR factorization); then $QR\hat{x} = QQ^Tb$, so $R\hat{x} = Q^TQQ^Tb = Q^Tb$ and $\hat{x} = R^{-1}Q^Tb$.
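A numeric sketch comparing the normal-equation solution with NumPy's built-in least-squares solver (the data is illustrative):

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])
x_hat = np.linalg.solve(A.T @ A, A.T @ b)     # normal equations: A^T A x = A^T b
print(x_hat)                                  # least-squares solution [5. -3.]
print(np.linalg.lstsq(A, b, rcond=None)[0])   # same answer via lstsq
```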
## 7.1 Diagonalization of Symmetric Matrices
A square matrix that is symmetric across its main diagonal.
- symmetric matrix : a square matrix A such that $A^T = A$
- Thm.1
If $A$ is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.
- orthogonally diagonalizable : there exist an orthogonal matrix $P$ (so $P^T=P^{-1}$) and a diagonal matrix $D$ such that $A=PDP^{-1} = PDP^T$; note $A^T=(PDP^T)^T=PD^TP^T=A$, so such a matrix is automatically symmetric
- Thm.2
An n$\times$n matrix $A$ is orthogonally diagonalizable if and only if $A$ is a symmetric matrix.
- Thm.3 the Spectral Theorem for Symmetric Matrices
An n$\times$n **symmetric** matrix $A$ has all of the following properties:
    - A is orthogonally diagonalizable
    - A has n real eigenvalues (counting multiplicities) and n orthonormal eigenvectors in total
    - eigenspaces for different eigenvalues are orthogonal to each other
- Spectral Decompostion
<img src="https://i.imgur.com/n6CoWp1.png" width = 350></img>
    - $A = \lambda_1 u_1 u_1^T + \lambda_2 u_2 u_2^T + ... + \lambda_n u_n u_n^T$
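A quick numeric verification of the spectral decomposition (the matrix is illustrative): `np.linalg.eigh` returns orthonormal eigenvectors for a symmetric matrix, and summing $\lambda_i u_iu_i^T$ rebuilds A.

```python
import numpy as np

A = np.array([[6.0, 2.0], [2.0, 3.0]])       # symmetric
vals, U = np.linalg.eigh(A)                  # orthonormal eigenvectors in the columns of U
recon = sum(lam * np.outer(u, u) for lam, u in zip(vals, U.T))
print(np.allclose(A, recon))                 # True: the spectral decomposition reproduces A
```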
## 7.2 Quadratic Form
### Quadratic Form
a function $Q(x)$ defined on $R^n$ by $Q(x)=x^TAx$, where $A$ is a symmetric matrix called "the matrix of the quadratic form"
ex.
$A =\begin{bmatrix}d_1&a_{12}&a_{13}\\a_{12}&d_2&a_{23}\\a_{13}&a_{23}&d_3\end{bmatrix}$ $x=\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}$
$Q(x)=x^TAx = (d_1)x_1^2+(d_2)x_2^2+(d_3)x_3^2 + (2a_{12})x_1x_2+(2a_{13})x_1x_3+(2a_{23})x_2x_3$
### Change of Variable in a Quadratic Form
Let $x=Py$ (so $y=P^{-1}x$) with $A=PDP^T$ and P orthogonal
$\implies x^TAx = (Py)^TA(Py) = y^TP^TAPy = y^TDy$
- Thm.4 The Principal Axes Theorem
There is an orthogonal change of variable $x=Py$ that transforms $x^TAx$ into $y^TDy$, which has no cross-product terms.
### A Geometric View of Principal Axes
- the graph of $Q(x) = c$ (c a constant)
- terminology
a quadratic form Q is
    - positive definite 正定 if Q(x)>0 for all x$\neq 0$
    - negative definite 負定 if Q(x)<0 for all x$\neq 0$
    - indefinite if Q(x) takes both positive and negative values
    - positive semidefinite 半正定 if Q(x) $\geq 0$ for all x
    - negative semidefinite 半負定 if Q(x) $\leq 0$ for all x
- Thm.5 Quadratic forms and Eigenvalues
Let A be a symmetric matrix. The quadratic form $x^TAx$ is :
a. positive definite $\leftrightarrow$ eigenvalues are all positive
b. negative definite $\leftrightarrow$ eigenvalues are all negative
c. indefinite $\leftrightarrow$ have both positive/negative eigenvalues
(follows from the change of variable in a quadratic form above)
## 7.3 Constrained Optimization
Arrange the problem so that x varies over the set of unit vectors (constrain x to be a unit vector).
- For a quadratic form Q(x) with no cross-product terms : the constrained max and min are the greatest and least eigenvalues.
- Thm.6
least eigenvalue : m, greatest eigenvalue : M
m $\leq x^TAx \leq$ M over unit vectors x, and every value in between is attained (by continuity)
$x^TAx = \lambda$ (λ = m or M) exactly when x is a unit eigenvector corresponding to $\lambda$
- Thm.7
With one more constraint — $x^Tx=1$ and $x^Tu_1=0$ — the maximum becomes $\lambda_2$, attained at a corresponding unit eigenvector.
- Thm.8
constraints : $x^Tx=1$, $x^Tu_1=0$, ... , $x^Tu_{k-1}=0$
then max = $\lambda_k$, attained at unit eigenvectors corresponding to $\lambda_k$
## 7.4 The Singular Value Decomposition

min & max
## Invertible Matrix Theorem
<img src="https://i.imgur.com/Ksy6Ing.png" width=480></img>
<img src="https://i.imgur.com/uyFxkUr.png" width=480></img>
<img src="https://i.imgur.com/6MncVfL.png" width=480></img>
<img src="https://i.imgur.com/uUR6HJf.png" width=480></img>
###### tags: `Linear Algebra` `大二`