# Brief Description of "Eigenvectors from Eigenvalues" Written by Joseph M. G. Tsai Nov. 18th, 2019 Reference: https://arxiv.org/abs/1908.03795 This theory was published by Peter B. Denton, Stephen J. Parke, Terence Tao, and Xining Zhang on Aug 10th, 2019. Denton, Parke, and Zhang are theoretical physicists, and they occasionally discovered this theory during their researches for neutrino oscillation. Tao is a mathematician and received the Fields Medal in 2006. This theory is applied to Hermitian matrices, which are widely used in many science and engineering areas, including physics, electrical engineering, mechanical engineering, ... etc. The most powerful part of this theory is that we can directly calculate the amplitude of each element of eigenvectors without knowing the value of individual matrix elements but only from the distribution of matrix eigenvalues. This theory is so simple and so fundamental that every mathematician don't believe that it is real in their first heard, because the modern linear algebra system has been well-established since early 19th century, and almost all fundamental theories are expected being already found in the past. Thus, when Denton, Parke, and Zhang first introduced this theory to Tao a few months ago, Tao didn't believe that it is true at that time, much less Denton, Parke, and Zhang are not mathematicians but physicists. To inspect into matrix elementwise point of view more intuitively, I have rewritten this theory and try to let the proof being easy enough and able to be handled by talented collage freshmen or sophomore students who have already took the course of linear algebra ## Properties and Notations * Let $\mathbf A$ be an $n \times n$ Hermitian matrix (i.e. 
$\mathbf A^* = \mathbf A$), then it has some well-known properties:
  * All the eigenvalues of $\mathbf A$ are real numbers, denoted by $\lambda_1, \lambda_2, \cdots, \lambda_n$, with corresponding **normalized** eigenvectors $\mathbf v_1, \mathbf v_2, \cdots, \mathbf v_n$. That is, for each $i = 1, \cdots, n$, we have
    $\mathbf A \mathbf v_i = \lambda_i \mathbf v_i \tag{1}$
    where $\lambda_i$ is a real number, and $\left \Vert \mathbf v_i \right \Vert = 1$
  * $\left\{ \mathbf v_1, \mathbf v_2, \cdots, \mathbf v_n \right\}$ forms an orthonormal basis. That is, let
    $\mathbf V = \begin{bmatrix} \mathbf v_1 & \mathbf v_2 & \cdots & \mathbf v_n \end{bmatrix} \tag{2}$
    then we have
    $\mathbf V^{-1} = \mathbf V^* \tag{3}$
    $\mathbf V \mathbf V^* = \mathbf V^* \mathbf V = \mathbf I_n \tag{4}$
    $\mathbf v_i^* \mathbf v_j = \begin{cases} 1, \text{ if } i = j \\ 0, \text{ if } i \neq j \end{cases} \tag{5} \label{5}$
  * Furthermore, we can shift the phase of some $\mathbf v_i$ so that
    $\det \left( \mathbf V \right) = 1 \tag{6} \label{6}$
    In the language of Lie groups, we say that $\mathbf V \in \mathrm{SU}(n)$
  * $\mathbf A$ can be diagonalized as
    $\mathbf A = \mathbf V \mathbf \Lambda \mathbf V^* \tag{7}$
    where
    $\mathbf \Lambda = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix} \tag{8}$
* For each $\mathbf v_i$, we expand it as $\mathbf v_i = \begin{bmatrix} v_{1,i} \\ v_{2,i} \\ \vdots \\ v_{n,i} \end{bmatrix}$, so we can express $\mathbf V$ elementwise as
  $\mathbf V = \begin{bmatrix} v_{1,1} & v_{1,2} & \cdots & v_{1,n} \\ v_{2,1} & v_{2,2} & \cdots & v_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ v_{n,1} & v_{n,2} & \cdots & v_{n,n} \end{bmatrix} \tag{9}$
* Let $\mathbf M_i = \mathbf A_{ii}$ be the $(n-1) \times (n-1)$ matrix obtained by removing the $i$th row and the $i$th column from $\mathbf A$, that is
  $\mathbf M_i = \begin{bmatrix} a_{1,1} & \cdots & a_{1,i-1} & a_{1,i+1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{i-1,1} & \cdots & a_{i-1,i-1} & a_{i-1,i+1} & \cdots & a_{i-1,n} \\ a_{i+1,1} & \cdots & a_{i+1,i-1} & a_{i+1,i+1} & \cdots & a_{i+1,n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & \cdots & a_{n,i-1} & a_{n,i+1} & \cdots & a_{n,n} \\ \end{bmatrix} \tag{10}$
* Since $\mathbf M_i$ is also a Hermitian matrix, its eigenvalues are also real numbers, denoted by $\eta_{i,1}, \eta_{i,2}, \cdots, \eta_{i,n-1}$
* Let $\mathbf V_{ij}$ be the $(n-1) \times (n-1)$ matrix obtained by removing the $i$th row and the $j$th column from $\mathbf V$, that is
  $\mathbf V_{ij} = \begin{bmatrix} v_{1,1} & \cdots & v_{1,j-1} & v_{1,j+1} & \cdots & v_{1,n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ v_{i-1,1} & \cdots & v_{i-1,j-1} & v_{i-1,j+1} & \cdots & v_{i-1,n} \\ v_{i+1,1} & \cdots & v_{i+1,j-1} & v_{i+1,j+1} & \cdots & v_{i+1,n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ v_{n,1} & \cdots & v_{n,j-1} & v_{n,j+1} & \cdots & v_{n,n} \\ \end{bmatrix} \tag{11}$
* Let $\mathbf U_i$ be the $(n-1) \times n$ matrix obtained by removing the $i$th row from $\mathbf V$, that is
  $\mathbf U_i = \begin{bmatrix} v_{1,1} & \cdots & v_{1,n} \\ \vdots & \ddots & \vdots \\ v_{i-1,1} & \cdots & v_{i-1,n} \\ v_{i+1,1} & \cdots & v_{i+1,n} \\ \vdots & \ddots & \vdots \\ v_{n,1} & \cdots & v_{n,n} \end{bmatrix} \tag{12}$
* From the fact that $\mathbf V \mathbf V^* = \mathbf I_n$ (i.e. the rows of $\mathbf V$ are also orthonormal), by direct elementwise matrix multiplication we have
  $\mathbf U_i \mathbf U_i^* = \mathbf I_{n-1} \tag{13} \label{13}$
* Furthermore, since $\mathbf V \mathbf \Lambda \mathbf V^* = \mathbf A$, by direct elementwise matrix multiplication we also have
  $\mathbf U_i \mathbf \Lambda \mathbf U_i^* = \mathbf M_i \tag{14} \label{14}$

## Lemma #1

The following equality holds for each $i, j = 1, \cdots, n$
$\det \left( \mathbf V_{ij} \right) = \left( -1 \right)^{i+j} v_{i,j}^* \tag{15} \label{15}$
Actually, this lemma
is a basic property of the Lie group $\mathrm {SU}(n)$

### Proof

* Applying equation $\eqref{5}$, we have
  $\begin{align} &\mathbf V^* \cdot \begin{bmatrix} \mathbf v_1 & \cdots & \mathbf v_{j-1} & \mathbf e_i & \mathbf v_{j+1} & \cdots & \mathbf v_n \end{bmatrix} \\ &= \begin{bmatrix} \mathbf v_1^* \\ \mathbf v_2^* \\ \vdots \\ \mathbf v_n^* \end{bmatrix} \cdot \begin{bmatrix} \mathbf v_1 & \cdots & \mathbf v_{j-1} & \mathbf e_i & \mathbf v_{j+1} & \cdots & \mathbf v_n \end{bmatrix} \\ &= \begin{bmatrix} 1 & & & v_{i,1}^* & & & \\ & \ddots & & \vdots & & & \\ & & 1 & v_{i,j-1}^* & & & \\ & & & v_{i,j}^* & & & \\ & & & v_{i,j+1}^* & 1 & & \\ & & & \vdots & & \ddots & \\ & & & v_{i,n}^* & & & 1 \end{bmatrix} && \tag{16} \label{16} \\ \end{align}$
  where $\mathbf e_i$ denotes the $i$th standard basis vector
* Taking the determinant of both sides of equation $\eqref{16}$, we have
  $\begin{align} &\det \left( \mathbf V^* \right) \cdot \det \left( \begin{bmatrix} \mathbf v_1 & \cdots & \mathbf v_{j-1} & \mathbf e_i & \mathbf v_{j+1} & \cdots & \mathbf v_n \end{bmatrix} \right) \\ &= \det \left( \begin{bmatrix} 1 & & & v_{i,1}^* & & & \\ & \ddots & & \vdots & & & \\ & & 1 & v_{i,j-1}^* & & & \\ & & & v_{i,j}^* & & & \\ & & & v_{i,j+1}^* & 1 & & \\ & & & \vdots & & \ddots & \\ & & & v_{i,n}^* & & & 1 \end{bmatrix} \right) && \tag{17} \label{17} \\ \end{align}$
* Directly calculating the determinant (expanding along the $j$th row), we have
  $\det \left( \begin{bmatrix} 1 & & & v_{i,1}^* & & & \\ & \ddots & & \vdots & & & \\ & & 1 & v_{i,j-1}^* & & & \\ & & & v_{i,j}^* & & & \\ & & & v_{i,j+1}^* & 1 & & \\ & & & \vdots & & \ddots & \\ & & & v_{i,n}^* & & & 1 \end{bmatrix} \right) = v_{i,j}^* \tag{18} \label{18}$
* Applying equations $\eqref{6}$ and $\eqref{18}$ to equation $\eqref{17}$, and noting that $\det \left( \mathbf V^* \right) = \overline{\det \left( \mathbf V \right)} = 1$, we have
  $\begin{align} &v_{i,j}^* = \det \left( \begin{bmatrix} \mathbf v_1 & \cdots & \mathbf v_{j-1} & \mathbf e_i & \mathbf v_{j+1} & \cdots & \mathbf v_n \end{bmatrix} \right) \\ &= \det \left( \begin{bmatrix} v_{1,1} & \cdots & v_{1,j-1} & 0 & v_{1,j+1} & \cdots & v_{1,n}
\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ v_{i-1,1} & \cdots & v_{i-1,j-1} & 0 & v_{i-1,j+1} & \cdots & v_{i-1,n} \\ v_{i,1} & \cdots & v_{i,j-1} & 1 & v_{i,j+1} & \cdots & v_{i,n} \\ v_{i+1,1} & \cdots & v_{i+1,j-1} & 0 & v_{i+1,j+1} & \cdots & v_{i+1,n} \\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ v_{n,1} & \cdots & v_{n,j-1} & 0 & v_{n,j+1} & \cdots & v_{n,n} \end{bmatrix} \right) \\ &= \left( -1 \right)^{i+j} \det \left( \begin{bmatrix} v_{1,1} & \cdots & v_{1,j-1} & v_{1,j+1} & \cdots & v_{1,n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ v_{i-1,1} & \cdots & v_{i-1,j-1} & v_{i-1,j+1} & \cdots & v_{i-1,n} \\ v_{i+1,1} & \cdots & v_{i+1,j-1} & v_{i+1,j+1} & \cdots & v_{i+1,n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ v_{n,1} & \cdots & v_{n,j-1} & v_{n,j+1} & \cdots & v_{n,n} \end{bmatrix} \right) \\ &= \left( -1 \right)^{i+j} \det \left( \mathbf V_{ij} \right) && \tag{19} \\ \end{align}$
  where the second equality follows from the cofactor expansion along the $j$th column, whose only nonzero entry is the $1$ in the $i$th row

## The Main Theorem

The following equality holds for each $i, j = 1, \cdots, n$
$\displaystyle \left \vert v_{i,j} \right \vert ^2 \cdot \prod_{k=1;k \neq j}^n \left( \lambda_k - \lambda_j \right) = \prod_{k=1}^{n-1} \left( \eta_{i,k} - \lambda_j \right) \tag{20}$
That is, $\left \vert v_{i,j} \right \vert$ can be fully expressed by the eigenvalues of $\mathbf A$ and $\mathbf M_i$ alone, without exploring the individual elements $a_{i,j}$

### Proof

* Applying equations $\eqref{13}$ and $\eqref{14}$, we have
  $\begin{align} &\mathbf M_i - \lambda_j \mathbf I_{n-1} = \mathbf U_i \mathbf \Lambda \mathbf U_i^* - \lambda_j \mathbf U_i \mathbf U_i^* = \mathbf U_i \left( \mathbf \Lambda - \lambda_j \mathbf I_n \right) \mathbf U_i^* \\ &= \begin{bmatrix} v_{1,1} & \cdots & v_{1,n} \\ \vdots & \ddots & \vdots \\ v_{i-1,1} & \cdots & v_{i-1,n} \\ v_{i+1,1} & \cdots & v_{i+1,n} \\ \vdots & \ddots & \vdots \\ v_{n,1} & \cdots & v_{n,n} \end{bmatrix} \begin{bmatrix} \lambda_1 - \lambda_j & & & & & & \\ & \ddots & & & & & \\ &
& \lambda_{j-1} - \lambda_j & & & & \\ & & & 0 & & & \\ & & & & \lambda_{j+1} - \lambda_j & & \\ & & & & & \ddots & \\ & & & & & & \lambda_n - \lambda_j \end{bmatrix} \begin{bmatrix} v_{1,1}^* & \cdots & v_{i-1,1}^* & v_{i+1,1}^* & \cdots & v_{n,1}^* \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ v_{1,n}^* & \cdots & v_{i-1,n}^* & v_{i+1,n}^* & \cdots & v_{n,n}^* \end{bmatrix} \\ &= \begin{bmatrix} v_{1,1} & \cdots & v_{1,j-1} & v_{1,j+1} & \cdots & v_{1,n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ v_{i-1,1} & \cdots & v_{i-1,j-1} & v_{i-1,j+1} & \cdots & v_{i-1,n} \\ v_{i+1,1} & \cdots & v_{i+1,j-1} & v_{i+1,j+1} & \cdots & v_{i+1,n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ v_{n,1} & \cdots & v_{n,j-1} & v_{n,j+1} & \cdots & v_{n,n} \end{bmatrix} \begin{bmatrix} \lambda_1 - \lambda_j & & & & & \\ & \ddots & & & & \\ & & \lambda_{j-1} - \lambda_j & & & \\ & & & \lambda_{j+1} - \lambda_j & & \\ & & & & \ddots & \\ & & & & & \lambda_n - \lambda_j \end{bmatrix} \begin{bmatrix} v_{1,1}^* & \cdots & v_{i-1,1}^* & v_{i+1,1}^* & \cdots & v_{n,1}^* \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ v_{1,j-1}^* & \cdots & v_{i-1,j-1}^* & v_{i+1,j-1}^* & \cdots & v_{n,j-1}^* \\ v_{1,j+1}^* & \cdots & v_{i-1,j+1}^* & v_{i+1,j+1}^* & \cdots & v_{n,j+1}^* \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ v_{1,n}^* & \cdots & v_{i-1,n}^* & v_{i+1,n}^* & \cdots & v_{n,n}^* \end{bmatrix} \\ &= \mathbf V_{ij} \cdot \begin{bmatrix} \lambda_1 - \lambda_j & & & & & \\ & \ddots & & & & \\ & & \lambda_{j-1} - \lambda_j & & & \\ & & & \lambda_{j+1} - \lambda_j & & \\ & & & & \ddots & \\ & & & & & \lambda_n - \lambda_j \end{bmatrix} \cdot \mathbf V_{ij}^* && \tag{21} \label{21} \\ \end{align}$
  where dropping the $j$th column of $\mathbf U_i$ and the $j$th row of $\mathbf U_i^*$ is justified because the $j$th diagonal entry $\lambda_j - \lambda_j = 0$ annihilates their contribution
* From equation $\eqref{15}$ we have $\det \left( \mathbf V_{ij} \right) = \left( -1 \right)^{i+j} v_{i,j}^*$ and $\det \left( \mathbf V_{ij}^* \right) = \overline{\det \left( \mathbf V_{ij} \right)} = \left( -1 \right)^{i+j} v_{i,j}$. Taking the determinant of both sides of equation $\eqref{21}$, we have
  $\displaystyle \left( -1 \right)^{i+j} v_{i,j}^* \cdot
\prod_{k=1;k \neq j}^n \left( \lambda_k - \lambda_j \right) \cdot \left( -1 \right)^{i+j} v_{i,j} = \det \left( \mathbf M_i - \lambda_j \mathbf I_{n-1} \right) = \prod_{k=1}^{n-1} \left( \eta_{i,k} - \lambda_j \right) \tag{22}$
  Since $\left( -1 \right)^{2 \left( i+j \right)} = 1$ and $v_{i,j}^* v_{i,j} = \left \vert v_{i,j} \right \vert ^2$, this simplifies to
  $\displaystyle \left \vert v_{i,j} \right \vert ^2 \cdot \prod_{k=1;k \neq j}^n \left( \lambda_k - \lambda_j \right) = \prod_{k=1}^{n-1} \left( \eta_{i,k} - \lambda_j \right) \tag{23}$
  which is exactly equation $(20)$, completing the proof
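Both Lemma #1 and the main identity are easy to verify numerically. Below is a minimal sketch in Python with NumPy (not part of the proof above); the matrix size `n = 5` and the random seed are arbitrary choices for illustration:

```python
# Numerical sanity check of equations (15) and (20) for a random Hermitian matrix.
import numpy as np

rng = np.random.default_rng(2019)
n = 5

# A random Hermitian matrix: A = (B + B*) / 2 satisfies A* = A.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2

# Real eigenvalues lam and orthonormal eigenvectors as the columns of V.
lam, V = np.linalg.eigh(A)

# Phase-shift one eigenvector so that det(V) = 1, as in equation (6).
V[:, 0] /= np.linalg.det(V)

for i in range(n):
    # Lemma #1, eq. (15): det(V_ij) = (-1)^(i+j) * conj(v_ij).
    for j in range(n):
        V_ij = np.delete(np.delete(V, i, axis=0), j, axis=1)
        assert np.isclose(np.linalg.det(V_ij), (-1) ** (i + j) * V[i, j].conj())

    # M_i: A with its i-th row and column removed, eq. (10).
    M_i = np.delete(np.delete(A, i, axis=0), i, axis=1)
    eta = np.linalg.eigvalsh(M_i)  # its eigenvalues eta_{i,1}, ..., eta_{i,n-1}

    # Main identity, eq. (20): |v_ij|^2 * prod_{k != j}(lam_k - lam_j)
    #                          = prod_k (eta_{i,k} - lam_j).
    for j in range(n):
        lhs = abs(V[i, j]) ** 2 * np.prod(np.delete(lam, j) - lam[j])
        rhs = np.prod(eta - lam[j])
        assert np.isclose(lhs, rhs)

print("equations (15) and (20) verified for all i, j")
```

Note that `numpy.linalg.eigh` already returns orthonormal eigenvectors as the columns of `V`, so only a phase rescaling of one column is needed to enforce the $\det \left( \mathbf V \right) = 1$ convention of equation $(6)$ that Lemma #1 relies on; the main identity itself is phase-independent.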