# Mathematics in Kalman Filter
###### tags: `tutorial` `software` `electrical_system` `NTURT`
## Linear Algebra
### Eigenvalue Decomposition
#### How to Calculate
- $A \vec{u} = \lambda I \vec{u}$
- $(A - \lambda I) \vec{u} = 0$
- $\det(A - \lambda I) = 0$
- Then solve the characteristic equation to obtain $\lambda$ (see the sketch after this list)
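A minimal NumPy sketch of this recipe, using a made-up $2\times 2$ symmetric matrix whose characteristic equation is $\lambda^2-4\lambda+3=0$:

```python
import numpy as np

# Made-up example matrix for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# det(A - lambda I) = lambda^2 - 4*lambda + 3 = 0; np.roots solves
# the characteristic polynomial from its coefficients.
lam_by_roots = np.roots([1.0, -4.0, 3.0])

# np.linalg.eigvals computes the same eigenvalues numerically.
lam_by_numpy = np.linalg.eigvals(A)

print(np.sort(lam_by_roots))  # [1. 3.]
print(np.sort(lam_by_numpy))  # [1. 3.]
```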
#### From the definition of eigenvalue to the decomposition
- $\pmatrix{A\vec{u_{1}}&A\vec{u_{2}}&\cdots&A\vec{u_{n}}}=
\pmatrix{\lambda_{1}\vec{u_{1}}&\lambda_{2}\vec{u_{2}}&\cdots&\lambda_{n}\vec{u_{n}}}$
- $A\pmatrix{\vec{u_{1}}\ \vec{u_{2}}\ \ldots\ \vec{u_{n}}}=
\pmatrix{\vec{u_{1}}\ \vec{u_{2}}\ \ldots\ \vec{u_{n}}}
\pmatrix{\lambda_{1}&0&\cdots&0\\
0&\lambda_{2}&\cdots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&\lambda_{n}}$
- let $\pmatrix{\vec{u_{1}}\ \vec{u_{2}}\ldots\vec{u_{n}}}=\bar{U};\ \pmatrix{\lambda_{1}&0&\cdots&0\\
0&\lambda_{2}&\cdots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&\lambda_{n}}=
\bar{D}$
- $A\bar{U}=\bar{U}\bar{D}$
- $A=\bar{U}\bar{D}\bar{U}^{-1}$; when $A$ is real symmetric, the eigenvectors can be chosen orthonormal, so $\bar{U}^{-1}=\bar{U}^T$ and $A=\bar{U}\bar{D}\bar{U}^T$ (verified in the sketch below)
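As a quick check, a sketch verifying the decomposition on a symmetric example (`np.linalg.eigh` is NumPy's eigensolver for symmetric matrices):

```python
import numpy as np

# Symmetric example, so the eigenvectors can be chosen orthonormal.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns the eigenvalues (diagonal of D) and orthonormal
# eigenvectors (columns of U) of a symmetric matrix.
eigvals, U = np.linalg.eigh(A)
D = np.diag(eigvals)

# U is orthogonal, so U^T = U^{-1}, and A = U D U^T holds.
assert np.allclose(U.T, np.linalg.inv(U))
assert np.allclose(A, U @ D @ U.T)
```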
#### Problems
- What if $A \in \mathbb{R}^{m\times n}$ with $m \not= n$?
- What if $A$ is not full rank?
### Singular Value Decomposition
#### Make Something Possible to Decompose
- We may not be able to eigendecompose $A$ itself, but we can always decompose $AA^T$, since it is square and symmetric
- Suppose $AA^T = UDU^T$
- We can rewrite it as $UDU^T = (U\Sigma V^T)(U\Sigma V^T)^T$, where $V$ and $\Sigma$ satisfy $V^TV=I$ and $\Sigma \Sigma^T = D$
- $\Sigma = \pmatrix{\bar{S}\\\bar{0}}$ or $\Sigma = \pmatrix{\bar{S}&\bar{0}}$
- $\bar{S} = \pmatrix{\sigma_{1}&0&\cdots&0\\
0&\sigma_{2}&\cdots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&\sigma_{n}}$, where each $\sigma_{i}$ is the non-negative square root of an eigenvalue of $AA^T$, called a **singular value**
- Due to the property above, we can write $A = U\Sigma V^T$; this is the **Singular Value Decomposition (SVD)**
- *(Figure: an illustration of the SVD. The blue dots are the singular values, i.e. the square roots of the eigenvalues of $AA^T$. A numerical sketch follows.)*
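A sketch of the shapes involved, using an arbitrary tall ($m>n$) example matrix:

```python
import numpy as np

# Arbitrary 3x2 example (m = 3 > n = 2).
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)   # full SVD: U is 3x3, s has 2 entries, Vt is 2x2
Sigma = np.zeros(A.shape)     # Sigma is 3x2: S stacked on a zero row
Sigma[:len(s), :len(s)] = np.diag(s)

# A = U Sigma V^T, and A A^T = U (Sigma Sigma^T) U^T = U D U^T.
assert np.allclose(A, U @ Sigma @ Vt)
assert np.allclose(A @ A.T, U @ Sigma @ Sigma.T @ U.T)
```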
#### Four Subspaces of a Matrix
- Suppose $A = U\Sigma V^T$
- Then $AV = U\Sigma$, since $V^TV=I$
- **Think about it for a while, and figure out the answers to the following questions.**
- Suppose that $A \in \mathbb{R}^{m\times n}$ and $m>n$:
- If $\vec{y}=A\vec{x}$, what are the dimensions of $\vec{x}$, $\vec{y}$, and $A$?
- Furthermore, what are the dimensions of $U$, $\Sigma$, and $V$?
- How does $A$ map a vector from one space to another?
- If $\vec{x}$ is an $n\times 1$ vector:
- Is $\vec{x}$ in the space spanned by $V$?
- If yes, is $\vec{x}$ spanned by the columns or the rows of $V$?
- Could $A\vec{x}$ be $\vec{0}$?
- If yes, which columns of $V$ span those $\vec{x}$?
- How about those $\vec{x}$ that make $A\vec{x} \not= \vec{0}$? Which columns of $V$ span them?
- We know that if $A\vec{x}=\vec{0}$, $\vec{x}$ is said to be in the null space of $A$; otherwise, it has a component in the rank space (row space) of $A$.
- How can we find the null space of $A$?
- How can we find the rank space of $A$?
- How can we find the null space of $A^T$?
- How can we find the rank space of $A^T$? (A numerical sketch follows this list.)
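A numerical sketch of these questions, reading all four subspaces off the SVD of a made-up rank-deficient matrix:

```python
import numpy as np

# Rank-deficient example: the second row is twice the first.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s.max()
rank = int(np.sum(s > tol))       # here rank == 1

rank_space_A  = Vt[:rank]         # columns of V with sigma > 0 span the rank space of A
null_space_A  = Vt[rank:]         # remaining columns of V span the null space of A
rank_space_AT = U[:, :rank]       # columns of U with sigma > 0 span the rank space of A^T
null_space_AT = U[:, rank:]       # remaining columns of U span the null space of A^T

# Sanity checks: null-space directions are annihilated.
assert np.allclose(A @ null_space_A.T, 0.0)
assert np.allclose(A.T @ null_space_AT, 0.0)
```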
#### Pseudo-Inverse via the SVD
- If $\vec{y}=A\vec{x}$, how can we find $\vec{x}$? We can use the pseudo-inverse in case $A$ is not an invertible $m\times m$ matrix.
- The pseudo-inverse of $A$ is $A^+=(U\Sigma V^T)^+=V\Sigma^+U^T$
- $\Sigma = \pmatrix{\bar{S}\\\bar{0}}\implies\Sigma^+ = \pmatrix{\bar{S}^+&\bar{0}}$ or $\Sigma = \pmatrix{\bar{S}&\bar{0}}\implies\Sigma^+ = \pmatrix{\bar{S}^+\\\bar{0}}$
- $\bar{S}^+ = \pmatrix{1/\sigma_{1}&0&\cdots&0\\
0&1/\sigma_{2}&\cdots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&1/\sigma_{n}}$
- Can $\sigma_{i}$ be $0$ here? (See the sketch below.)
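A sketch building $A^+=V\Sigma^+U^T$ by hand and comparing it with `np.linalg.pinv` (the overdetermined system below is made up):

```python
import numpy as np

# Overdetermined example: 3 equations, 2 unknowns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 2.0, 2.9])

# Reduced SVD; all sigmas are nonzero here, so 1/sigma is safe.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T

# NumPy's built-in pseudo-inverse agrees.
assert np.allclose(A_pinv, np.linalg.pinv(A))

# x = A^+ y is the least-squares solution of y = A x.
x = A_pinv @ y
print(x)
```

As for the question above: a zero $\sigma_{i}$ cannot be inverted; in practice `np.linalg.pinv` treats singular values below a tolerance as zero and sets the corresponding $1/\sigma_{i}$ to $0$ instead of dividing.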
#### Note
- When the elements in the matrix are not real, the transposes of $U$ and $V$ become conjugate transposes, i.e. $A^+=V\Sigma^+U^*$, where $(\cdot)^*$ denotes the conjugate transpose.
- When we want $A\vec{x}$ to be very close to $\vec{0}$ but $\vec{x} \not= \vec{0}$, how do we find $\vec{x}$? We may choose the column of $V$ associated with the smallest singular value (see the sketch after this list).
- If you have a hard time reading this section and answering all the questions, you may refer to [Four Fundamental Subspaces of Linear Algebra](https://blogs.mathworks.com/cleve/2016/11/28/four-fundamental-subspaces-of-linear-algebra/)
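A sketch of that trick: for a nearly rank-deficient made-up matrix, the unit vector minimizing $\|A\vec{x}\|$ is the column of $V$ for the smallest singular value, and the minimum equals that singular value:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
# Make the third column nearly dependent on the first two.
A[:, 2] = A[:, 0] + A[:, 1] + 0.01 * rng.standard_normal(10)

U, s, Vt = np.linalg.svd(A)
x = Vt[-1]                        # column of V for the smallest sigma (unit norm)

# ||A x|| equals the smallest singular value, which is near 0 here.
assert np.isclose(np.linalg.norm(A @ x), s[-1])
print(s[-1], np.linalg.norm(A @ x))
```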
## Probability
### Gaussian Distribution

### Markov Process
- We usually treat a sequence $\{x_{0},x_{1},\ldots,x_{n}\}$ as a process when $x_{n} = f(x_{0},x_{1},\ldots,x_{n-1})$
- Therefore we can expand this sequence iteratively and generate it like a discrete-time signal.
- When $x_{n}$ depends only on the previous state, i.e. $x_{n} = f(x_{n-1})$, we say it is a Markov process (a simulation sketch follows this list).
- For further information, please refer to: [Markov chain (馬可夫鏈)](https://zh.wikipedia.org/zh-tw/%E9%A9%AC%E5%B0%94%E5%8F%AF%E5%A4%AB%E9%93%BE)
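A minimal simulation of a Markov process, with a hypothetical two-state transition matrix (the numbers are made up):

```python
import numpy as np

# P[i, j] = probability of moving from state i to state j.
# Hypothetical two-state chain: 0 = sunny, 1 = rainy.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

rng = np.random.default_rng(42)
state = 0
chain = [state]
for _ in range(10):
    # The next state depends only on the current one: x_n = f(x_{n-1}).
    state = int(rng.choice(2, p=P[state]))
    chain.append(state)

print(chain)
```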
### Bayes' Theorem
