A way to approximate an eigenvector
(Figure: the green vector gradually approaching the blue eigenvector.)
Let \(A \in \mathbb{R}^{n\times n}\) be diagonalizable,
and assume the eigenvalues of \(A\) satisfy
\[|\lambda_1| > |\lambda_2| \geq |\lambda_3| \geq \dots \geq |\lambda_n|\]
Pick a random \(\vec{x_0} \in \mathbb{R}^n\)
and iterate \(\vec{x_{i+1}} = \frac{A\vec{x_i}}{\|A\vec{x_i}\|}\)
\(\vdots\)
When the number of iterations is large,
\(\vec{x_i}\) converges to an eigenvector corresponding to \(\lambda_1\)
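As a concrete sketch of the iteration above (the 2×2 matrix is a made-up example with dominant eigenvalue 3, not from the original notes):

```python
import numpy as np

def power_iteration(A, num_iters=1000, seed=0):
    """Approximate the dominant eigenpair of A by repeatedly
    multiplying by A and normalizing, as described above."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])   # random starting vector x0
    for _ in range(num_iters):
        x = A @ x
        x = x / np.linalg.norm(x)
    lam = x @ A @ x                       # Rayleigh quotient: eigenvalue estimate
    return lam, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # eigenvalues 3 and 1
lam, v = power_iteration(A)
print(lam)                               # ≈ 3.0
```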
Since \(A\) is diagonalizable, the eigenvectors of \(A\),
\(\{\vec{v_1}, \vec{v_2}, \dots ,\vec{v_n}\}\), form a basis of \(\mathbb{R}^n\),
so the random vector \(\vec{x_0}\) can be represented as
\[\vec{x_0} = c_1\vec{v_1} + c_2\vec{v_2} + \dots + c_n\vec{v_n}\]
What if \(\vec{x_0} \in \operatorname{span}(\{\vec{v_2}, \vec{v_3},\dots,\vec{v_n}\})\), i.e. \(c_1 = 0\)? Then the iteration can never recover \(\vec{v_1}\); but a randomly chosen \(\vec{x_0}\) has \(c_1 \neq 0\) with probability 1.
\[\begin{aligned} A\vec{x_0} &= A(c_1\vec{v_1} + c_2\vec{v_2} + \dots + c_n\vec{v_n})\\ &= c_1A\vec{v_1} + c_2A\vec{v_2} + \dots + c_nA\vec{v_n}\\ &=c_1\lambda_1\vec{v_1} + c_2\lambda_2\vec{v_2} + \dots + c_n\lambda_n\vec{v_n} \end{aligned}\]
\[ A^2\vec{x_0} = c_1\lambda_1^2\vec{v_1} + c_2\lambda_2^2\vec{v_2} + \dots + c_n\lambda_n^2\vec{v_n}\\ \vdots \]
\[ A^i\vec{x_0} = c_1\lambda_1^i\vec{v_1} + c_2\lambda_2^i\vec{v_2} + \dots + c_n\lambda_n^i\vec{v_n} \]
\[\begin{aligned} A^i\vec{x_0} &= c_1\lambda_1^i\vec{v_1} + c_2\lambda_2^i\vec{v_2} + \dots + c_n\lambda_n^i\vec{v_n}\\ &=\lambda_1^i[c_1\vec{v_1} + c_2(\frac{\lambda_2}{\lambda_1})^i\vec{v_2} + \dots + c_n(\frac{\lambda_n}{\lambda_1})^i\vec{v_n}] \end{aligned}\]
Since \(|\lambda_k/\lambda_1| < 1\) for \(k \geq 2\), the bracketed terms vanish as \(i\) grows, so \(A^i\vec{x_0}\) approaches \(\lambda_1^i c_1\vec{v_1}\), and the normalized iterate \(\vec{x_i}\) converges to the direction of \(\vec{v_1}\).
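A toy numerical check of this derivation (the eigenvalues, eigenvectors, and coefficients below are made-up choices): build \(A\) from known eigenpairs, then compare \(A^i\vec{x_0}\), rescaled by \(\lambda_1^i\), against \(c_1\vec{v_1}\).

```python
import numpy as np

# Chosen eigenpairs (assumptions for illustration): lambda_1 = 3, lambda_2 = 1.
lam1, lam2 = 3.0, 1.0
v1 = np.array([1.0, 1.0]) / np.sqrt(2)
v2 = np.array([1.0, -1.0]) / np.sqrt(2)
A = lam1 * np.outer(v1, v1) + lam2 * np.outer(v2, v2)

c1, c2 = 0.5, 2.0
x = c1 * v1 + c2 * v2            # x0 with known coordinates c1, c2
for _ in range(30):
    x = A @ x                    # after the loop, x = A^30 x0
print(x / lam1**30)              # ≈ c1 * v1, since (lam2/lam1)^30 ≈ 0
```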
– Simple
– Fast
– Too many restrictions (requires a diagonalizable \(A\) with a dominant eigenvalue, and finds only \(\lambda_1\))
The QR iteration
\(A \in \mathbb{R}^{n\times n}\)
just keep applying the QR decomposition to \(A\):
\[\begin{aligned}
A &= Q_1R_1\\
A_1 &= R_1Q_1
\end{aligned}\]
\[\begin{aligned} A_1 &= Q_2R_2\\ A_2 &= R_2Q_2 \end{aligned}\\ \vdots\]
We collect the \(Q_i\)s during the process;
when the number of iterations gets large,
\(Q_1Q_2\cdots Q_{i-1}Q_i = \begin{bmatrix} | & | & & | \\ \vec{v_1} & \vec{v_2} & \cdots & \vec{v_n}\\ | & | & & | \\ \end{bmatrix}\)
where \(\vec{v_1}, \vec{v_2}, \cdots, \vec{v_n}\)
are the eigenvectors of \(A\)
Moreover, when \(i\) gets large,
\(A_i\) will converge to \(\begin{bmatrix}
\lambda_1 & & & \\
& \lambda_2 & & \\
& & \ddots & \\
& & & \lambda_n\\
\end{bmatrix}\)
where \(\lambda_1, \lambda_2, \dots, \lambda_n\) are the eigenvalues of \(A\)
(exactly diagonal when \(A\) is symmetric; in general the limit is upper triangular, still with the eigenvalues on the diagonal).
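The loop above can be sketched in a few lines of NumPy (the symmetric 2×2 test matrix is a made-up example):

```python
import numpy as np

def qr_iteration(A, num_iters=200):
    """Unshifted QR iteration: factor A_k = Q_k R_k, set A_{k+1} = R_k Q_k,
    and accumulate the product Q_1 Q_2 ... Q_k along the way."""
    Ak = A.copy()
    Q_total = np.eye(A.shape[0])
    for _ in range(num_iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                   # similar to A_k, so same eigenvalues
        Q_total = Q_total @ Q        # Q_1 Q_2 ... Q_k
    return np.diag(Ak), Q_total      # eigenvalues, eigenvectors (columns)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric, eigenvalues 3 and 1
eigvals, eigvecs = qr_iteration(A)
print(eigvals)                       # ≈ [3., 1.]
```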
– diagonalize
– solve systems of difference equations
Before the PageRank algorithm, some search engines found the "best" website by the frequency of certain keywords.
The PageRank algorithm finds the "best" website by the "linkage" between the websites.
The matrix representation of this network:
\(\begin{bmatrix}0 & 1 & \frac{1}{3} & \frac{1}{3} & 1\\\frac{1}{2} & 0 & \frac{1}{3} & \frac{1}{3} & 0\\ 0 & 0 & 0 & \frac{1}{3} & 0\\\frac{1}{2} & 0 & 0 & 0 & 0\\0 & 0 & \frac{1}{3} & 0 & 0\end{bmatrix}\)
\(\begin{bmatrix} 0 & 1 & \frac{1}{3} & \frac{1}{3} & 1\\ \frac{1}{2} & 0 & \frac{1}{3} & \frac{1}{3} & 0\\ 0 & 0 & 0 & \frac{1}{3} & 0\\ \frac{1}{2} & 0 & 0 & 0 & 0\\ 0 & 0 & \frac{1}{3} & 0 & 0 \end{bmatrix} \begin{bmatrix} |\\ \vec{v}\\ |\\ \end{bmatrix}= \begin{bmatrix} |\\ \vec{v}\\ |\\ \end{bmatrix}\)
\(\begin{bmatrix} |\\ \vec{v}\\ |\\ \end{bmatrix}= \begin{bmatrix} ?\\ ?\\ \vdots\\ ?\\ \end{bmatrix}\)
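The unknown score vector \(\vec{v}\) can be found with the power iteration from earlier (the matrix name `M` and the uniform starting vector are my choices for this sketch, not from the original):

```python
import numpy as np

# Link matrix from the slide: entry (i, j) is the fraction of page j's
# score passed to page i; every column sums to 1.
M = np.array([
    [0,   1, 1/3, 1/3, 1],
    [1/2, 0, 1/3, 1/3, 0],
    [0,   0, 0,   1/3, 0],
    [1/2, 0, 0,   0,   0],
    [0,   0, 1/3, 0,   0],
])

v = np.ones(5) / 5               # start from a uniform score vector
for _ in range(1000):
    v = M @ v
    v = v / v.sum()              # keep the scores summing to 1
print(v)                         # the PageRank scores, satisfying M v = v
```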