# Week 5 Write Up - Tolley
## Eigenvalues and Eigenvectors
### Eigenvector Definition
Returning to square matrices: a square matrix is a transformation.
When we multiply matrix $A$ by vector $\vec{v}$ to get some vector $\vec{w}$, or:
$$A\vec{v} = \vec{w} $$
A vector that is only *scaled* by $A$ (its direction is unchanged or flipped) is an **eigenvector**. The *amount* by which it is scaled is called an **eigenvalue**, represented by lambda or $\lambda$. Every square matrix has at least one eigenvalue (possibly complex) with a corresponding non-zero eigenvector, such that:
$$
A\vec{v} = \lambda \vec{v}
$$
The spectrum of $A$ is the **collection** of its eigenvalues. The spectral radius is the **largest absolute value** among the eigenvalues (so yes, it is measured by absolute value).
Eigenvectors/eigenvalues encode the characteristics of the matrix's transformation, and they are specific to the matrix that spawns them. (The zero vector trivially satisfies $A\vec{v} = \lambda\vec{v}$, but it is excluded: an eigenvector must be non-zero, even when its eigenvalue is zero.)
If all of the standard basis vectors are eigenvectors, the matrix is a diagonal matrix and the diagonal entries are the eigenvalues.
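As a quick numerical check of the definition (the matrix here is just an arbitrary example of my own, not one from the lecture), numpy's `eig` returns eigenvalue/eigenvector pairs and each pair satisfies $A\vec{v} = \lambda\vec{v}$:

```python
import numpy as np

# Arbitrary example matrix (not from the lecture notes).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    v = eigenvectors[:, i]
    # A v should equal lambda * v for every eigenpair.
    print(lam, np.allclose(A @ v, lam * v))
```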
### Finding Eigenvalues
Usually done with a computer. However, when we know $\vec{v}$ is an eigenvector, we know:
$$
A\vec{v} = \lambda \vec{v}
$$
Rewriting it, we can get
$$
A\vec{v} - \lambda \vec{v} = 0
$$
Then, we can factor that out to
$$
(A-\lambda I)\vec{v} = 0
$$
Where $I$ is the identity matrix.
We need $\vec{v}$ to exist as a non-zero vector in the null space of $A - \lambda I$. Thus we know $A-\lambda I$ is *singular*, meaning it is *not invertible* (so it has a zero determinant).
Since we know $|A-\lambda I| = 0$, we know that if
$$
A =
\begin{bmatrix}
a & b \cr
c & d
\end{bmatrix}
$$
then
$$
A - \lambda I =
\begin{bmatrix}
a - \lambda & b \cr
c & d - \lambda
\end{bmatrix}
$$
and
$$
|A - \lambda I| = (a- \lambda)(d-\lambda) -bc = 0
$$
This works out to a polynomial equation in $\lambda$, known as the **characteristic polynomial**. You can expand the above equation as
$$
\lambda^2 - (a+d)\lambda + (ad-bc) = 0
$$
You can also simplify the eigenvalue finding formula to
$$
d^2 = m^2 - p
$$
Where $d$ is the distance between the mean value of the diagonal and each eigenvalue (so $\lambda = m \pm d$), $m$ is the mean value of the diagonal (half the trace), and $p$ is the matrix's determinant.
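Here is a small sketch checking that shortcut on a made-up 2×2 matrix (one where $m^2 - p > 0$, so the eigenvalues are real), against numpy's own eigenvalue routine:

```python
import numpy as np

# Arbitrary 2x2 example of my own.
a, b, c, d = 4.0, 1.0, 2.0, 3.0
A = np.array([[a, b],
              [c, d]])

m = (a + d) / 2            # mean of the diagonal (half the trace)
p = a * d - b * c          # determinant
dist = np.sqrt(m**2 - p)   # distance of each eigenvalue from m

shortcut = np.array([m - dist, m + dist])
print(np.sort(shortcut))                 # eigenvalues via the shortcut
print(np.sort(np.linalg.eigvals(A)))     # eigenvalues via numpy
```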
### Finding Eigenvectors
Start with the eigenvalues (every eigenvalue is paired with at least one eigenvector).
Remember: $(A-\lambda I)\vec{v} = 0$ and that $\vec{v}$ is in the null space of $A-\lambda I$. (So you know that the determinant of $A-\lambda I$ has to be zero, leading to the above equations. )
Then, for each eigenvalue, you substitute it into $A - \lambda I$ and do row operations (row reduction) to solve $(A-\lambda I)\vec{v} = 0$; the non-zero vectors spanning that null space are the eigenvectors for that eigenvalue.
Eigenvalues that are distinct have eigenvectors that are linearly independent. These eigenvectors are basis vectors for the eigenspaces defined by their eigenvalues.
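To make the row-operations step concrete: for each eigenvalue you plug it into $A - \lambda I$ and pull a non-zero vector out of its null space. A minimal numpy sketch (reusing the arbitrary 2×2 from above, and letting the SVD find the null-space direction instead of doing the row reduction by hand):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
I = np.eye(2)

for lam in np.linalg.eigvals(A):
    # The eigenvector lives in the null space of (A - lambda*I);
    # the right singular vector for the smallest singular value spans it.
    _, s, Vt = np.linalg.svd(A - lam * I)
    v = Vt[-1]
    print(lam, v, np.allclose((A - lam * I) @ v, 0))
```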
### Review of complex numbers
$$\sqrt{-1} = i$$
Numbers can be a combination of real and imaginary components, *i.e.* $(2+2i)$; these are complex numbers. They can also be written in polar form:
$$
e^{i \theta} = \cos(\theta)+i \sin(\theta)
$$
which corresponds to the unit vector
$$
\vec{u} = \begin{pmatrix}\cos(\theta)\cr \sin(\theta)\end{pmatrix}
$$
in the plane.
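A quick sanity check of Euler's formula with Python's built-in complex support (just a numerical confirmation, nothing from the lecture):

```python
import cmath
import math

theta = 0.75  # any angle in radians
lhs = cmath.exp(1j * theta)                       # e^(i*theta)
rhs = complex(math.cos(theta), math.sin(theta))   # cos(theta) + i*sin(theta)
print(lhs, rhs, cmath.isclose(lhs, rhs))
```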
### Properties of Eigenvalues/Eigenvectors
A non-zero eigenvector MAY have a zero eigenvalue. If this is the case, then $\vec{v}$ is in the null space of $A$ itself (since $A - 0I = A$).
For a diagonalizable matrix, the number of zero eigenvalues is the dimension of $N(A)$ and the number of non-zero eigenvalues equals the rank of $A$.
The *sum of the diagonal values* in square matrix $A$ is equal to the **sum of all the eigenvalues**.
The *product of all the eigenvalues* equals the **determinant** of matrix $A$. (If there is a zero eigenvalue, then you know the determinant is zero!)
For diagonal or triangular matrices, the eigenvalues are the diagonal entries.
Defective matrices are ones that do not have a full set of $n$ linearly independent eigenvectors. This can only happen when an eigenvalue is repeated (has multiplicity greater than one), though a repeated eigenvalue alone does not make a matrix defective.
Antisymmetric (skew-symmetric) matrices have all zeros on the diagonal; if any off-diagonal entries are non-zero they have **complex (purely imaginary) eigenvalues**.
Real symmetric matrices are never defective: they have all real eigenvalues (though not necessarily distinct) and their eigenvectors can be chosen to form an orthogonal basis for $\mathbb{R}^n$.
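A sketch checking the trace and determinant properties on an arbitrary matrix of my own choosing:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])

eigenvalues = np.linalg.eigvals(A)

# Sum of the eigenvalues equals the trace (sum of the diagonal).
print(np.isclose(eigenvalues.sum(), np.trace(A)))

# Product of the eigenvalues equals the determinant.
print(np.isclose(eigenvalues.prod(), np.linalg.det(A)))
```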
### Composition and Subspaces
If $A$ is a square matrix, it is a transformation. So it can be thought of as a function, with a domain (the input vectors $\vec{x}$) and a range (the outputs, which is the column space of $A$).
Domain, range, and column space are all vector spaces (which are defined by a basis).
The columns of a matrix are vectors; if they are linearly independent they can define a coordinate system (a basis), otherwise they are linearly dependent.
### Change of Basis
Assume matrix $A$ is full rank; then its column vectors are all linearly independent and $A$ is invertible, so its columns can serve as a new basis and multiplying by $A$ (or $A^{-1}$) converts coordinates between that basis and the standard one.
### Similarity
Matrix transformations happen from right to left, unlike English writing/reading. So $A\vec{x}=\vec{b}$ really reads as: take $\vec{x}$, transform it through $A$, and get $\vec{b}$.
For:
$$
A = CBC^{-1}
$$
$C$ is an invertible matrix. $C$ and $C^{-1}$ are basis-changing transformations.
For $A$ and $B$ to be similar, they must share the same rank, eigenvalues, determinant, trace, and so on (and their powers are similar, too).
BTW:
$$
A^k = CB^kC^{-1}
$$
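A sketch of similarity with made-up matrices: $A = CBC^{-1}$ shares its eigenvalues, trace, and determinant with $B$, and powers transform the same way:

```python
import numpy as np

B = np.array([[2.0, 0.0],
              [1.0, 3.0]])
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # any invertible matrix works here

A = C @ B @ np.linalg.inv(C)    # A is similar to B

# Similar matrices share eigenvalues, trace, and determinant.
print(np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B))))
print(np.isclose(np.trace(A), np.trace(B)))
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))

# Powers: A^k = C B^k C^{-1}.
k = 4
print(np.allclose(np.linalg.matrix_power(A, k),
                  C @ np.linalg.matrix_power(B, k) @ np.linalg.inv(C)))
```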
### Diagonalization
If $A_{n \times n}$ (i.e. a square matrix) has $n$ linearly independent eigenvectors (which is guaranteed when it has $n$ distinct eigenvalues, and makes those eigenvectors a basis), then it can be diagonalized.
$S$ is the matrix whose columns are the eigenvectors of $A$. Because those eigenvectors are linearly independent, $S$ is invertible.
Similar to similarity:
$$
A = S \Lambda S^{-1}
$$
Where $A$ is the diagonalizable matrix and $\Lambda$ is the diagonal matrix with the eigenvalues of $A$ on its diagonal.
btw, capital $\lambda$ is $\Lambda$!
Not all matrices have distinct eigenvalues (and not all matrices are diagonalizable).
Diagonalizable matrices represent a special kind of stretch transformation.
Symmetric matrices have orthogonal eigenvectors.
If the eigenvalues of $A$ are all less than one in absolute value, then as $k$ increases, $A^k$ approaches the zero matrix.
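A sketch of diagonalization on an arbitrary example matrix, also showing that $A^k$ shrinks toward the zero matrix when every $|\lambda| < 1$:

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.4]])      # eigenvalues 0.6 and 0.3, both < 1

eigenvalues, S = np.linalg.eig(A)   # columns of S are the eigenvectors
Lam = np.diag(eigenvalues)          # capital Lambda: eigenvalues on the diagonal

# A = S Lambda S^{-1}
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))

# All |lambda| < 1, so high powers of A approach the zero matrix.
print(np.linalg.matrix_power(A, 50).round(8))
```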
### Markov Process
A particular application of eigenvalues and eigenvectors.
Defined by the following property:
"Given the present, the future does not depend on the past."
In a Markov Process, the probability of transitioning from any state to any other state remains constant. The sum of all the probabilities of moving/not moving states is equal to 1.

Thus, we can build a transition matrix where each row holds the probabilities of moving from one state to each of the possible next states, i.e.:
$$
P =
\begin{bmatrix}
p_{11} & p_{12} & p_{13} \cr
p_{21} & p_{22} & p_{23} \cr
p_{31} & p_{32} & p_{33}
\end{bmatrix}
$$
Where row $i$ (call it $\vec{s_i}$) holds the probabilities of transitioning out of state $i$, and definitionally each row sums to one.
One is always an eigenvalue (there may be others).
This means $$\lambda_1 = 1$$ and
$$
v_{\lambda_{1}} = \begin{pmatrix}1\cr1\cr1\cr\end{pmatrix}
$$
such that you get $$P\vec{v} = \lambda \vec{v}$$
and
$$
s_2 = s_{1}P
$$
Where $P$ is a right stochastic matrix because its rows each sum to one.
On that note, writing $T = P^T$: the transpose of the transition matrix has columns that each sum to one, which makes it a left stochastic matrix.
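A sketch with a made-up 3-state transition matrix, checking that each row sums to one, that the all-ones vector is a right eigenvector with eigenvalue 1, and that a row state vector evolves as $\vec{s}_2 = \vec{s}_1 P$:

```python
import numpy as np

# Made-up right stochastic matrix: row i holds the probabilities
# of moving from state i to each state, so every row sums to 1.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

print(P.sum(axis=1))                      # each row sums to 1

ones = np.ones(3)
print(np.allclose(P @ ones, ones))        # eigenvalue 1 with eigenvector (1,1,1)

s1 = np.array([1.0, 0.0, 0.0])            # start in state 1 with certainty
print(s1 @ P)                             # s_2 = s_1 P
```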
### Steady State Solution
Whether you work with $P$ (row state vectors, $\vec{s}_{k+1} = \vec{s}_k P$) or with $T = P^T$ (column state vectors, $\vec{v}_{k+1} = T\vec{v}_k$), you are describing the same Markov process.
*Iff* at some point the state vector satisfies $T\vec{v}_s = \vec{v}_s$, you have a **steady state** solution. It corresponds to the eigenvalue of 1.
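To find the steady state numerically, take the eigenvector of the left stochastic matrix $P^T$ for the eigenvalue 1 and rescale it so its entries sum to one (a sketch reusing the made-up $P$ from above):

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Eigen-decompose the left stochastic matrix P^T (column state vectors).
eigenvalues, eigenvectors = np.linalg.eig(P.T)

# Pick the eigenvector whose eigenvalue is (numerically) 1 ...
i = np.argmin(np.abs(eigenvalues - 1.0))
v = np.real(eigenvectors[:, i])

# ... and scale it into a probability vector (entries sum to 1).
steady = v / v.sum()
print(steady)
print(np.allclose(steady @ P, steady))    # steady state: it no longer changes
```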
### Forecasting Future States
##### For special cases of T
$$
A^k\vec{v} = \lambda^k\vec{v}
$$
If we have distinct eigenvalues, then some future state $\vec{u}_k$ (which must be in the column space of $A$) is given by $\vec{u}_k = A^k \vec{u}_0$, where $\vec{u}_0$ is the initial state. To find $\vec{u}_k$, you take $c_1 \lambda^k_1 \vec{v}_{\lambda_{1}} + c_2 \lambda^k_2 \vec{v}_{\lambda_{2}} + \dots$, where the $c_i$ are constants found by solving $S\vec{c}=\vec{u}_0$ [$\vec{u}$ and $\vec{c}$ are vectors! $S$ is the matrix of eigenvectors!]
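A sketch of the forecasting formula on a made-up example (the transposed transition matrix from above, so column state vectors evolve as $\vec{u}_{k+1} = A\vec{u}_k$): expand $\vec{u}_0$ in the eigenvector basis by solving $S\vec{c} = \vec{u}_0$, then scale each term by $\lambda_i^k$.

```python
import numpy as np

# Left stochastic version of the made-up transition matrix.
A = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]]).T

u0 = np.array([1.0, 0.0, 0.0])       # initial state as a column vector

eigenvalues, S = np.linalg.eig(A)    # columns of S are the eigenvectors
c = np.linalg.solve(S, u0)           # coefficients from S c = u_0

k = 10
# u_k = c_1 lambda_1^k v_1 + c_2 lambda_2^k v_2 + ...
u_k = S @ (c * eigenvalues**k)
print(np.allclose(u_k, np.linalg.matrix_power(A, k) @ u0))
```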
### Spectral Decomposition
##### Symmetric Matrix
A matrix that is symmetric around the diagonal. It can only be a square matrix. A symmetric matrix is equal to its transpose.
When you transpose an orthogonal matrix, you get its inverse (for a rotation, that is the reverse rotation).
When the matrix is symmetric, the eigenvectors are perpendicular. So there always exists an orthogonal matrix that realigns the eigenvectors to the basis vectors, and vice versa.
You then get an equation that looks awfully similar to the similarity equation:
$$
A = Q \Lambda Q^T
$$
Where $Q$ is the orthogonal matrix of eigenvectors (so $Q^{-1} = Q^T$) and $\Lambda$ is the diagonal matrix of eigenvalues.
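A sketch of the spectral decomposition for an arbitrary symmetric matrix: `np.linalg.eigh` (numpy's routine for symmetric/Hermitian matrices) returns real eigenvalues and orthonormal eigenvectors, so $A = Q\Lambda Q^T$ with $Q^{-1} = Q^T$:

```python
import numpy as np

# An arbitrary symmetric matrix (equal to its transpose).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, Q = np.linalg.eigh(A)   # real eigenvalues, orthonormal eigenvectors
Lam = np.diag(eigenvalues)

print(np.allclose(Q.T, np.linalg.inv(Q)))   # orthogonal: transpose = inverse
print(np.allclose(A, Q @ Lam @ Q.T))        # A = Q Lambda Q^T
```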
## Quiz Write Up
#### 1.1
The answer to 1.1 is 'false' because $T$ is a transition matrix, and transition matrices don't go to the zero matrix: their rows must always sum to one to represent the probabilities of moving between states.
#### 1.2
The answer is 'true' because this is the definition of the vector $u_k$.
#### 1.3
The answer to 1.3 is 'true' because we know that $S$ is full rank: it has distinct eigenvalues, and thus linearly independent eigenvectors, so its columns span the complete space of where $u_k$ could be.
#### 1.4
The answer is 'true' by the definition of the vector presented.
#### 2.1
The answer is 'false' because a Markov transition matrix has rows that sum to 1 to represent the probabilities of moving or not moving states. $D$ doesn't do that.
#### 2.2
The answer is 'true' because we know that $D$ is symmetric, and real symmetric matrices are never defective, meaning they have *real* eigenvalues and a full set of eigenvectors.
#### 2.3
The answer is 'true' because the trace (the sum of the diagonal) of a square matrix (which in the case of a distance matrix is always zero) is equal to the sum of the eigenvalues of the matrix. That's a special property of square matrices.
#### 2.4
The answer is 'false' because rock and hiphop have a relatively high distance from each other according to $D$. This means that they are not at all similar to each other in the genres that follow. You can confirm this by looking at the first table under 'Situation 2' and seeing that rock and hiphop only share one value in common.