---
title: svd_team_4
---
Members:

- Steven White
- Seth
- Larry
**Section 1**
Example 1: Fill in the gaps
$\begin{bmatrix} 1&1&1&1&1&1\\1&1&1&1&1&1\\1&1&1&1&1&1\\1&1&1&1&1&1\\1&1&1&1&1&1\\1&1&1&1&1&1\\\end{bmatrix}=\begin{bmatrix}1\\1 \\1 \\1 \\1 \\1 \\ \end{bmatrix}[~1~~1~~1 ~~1~~1 ~~1 ~ ]$
Example 2: Fill in the gaps
$\begin{bmatrix} a&a&c&c&e&e\\a&a&c&c&e&e\\a&a&c&c&e&e\\a&a&c&c&e&e\\a&a&c&c&e&e\\a&a&c&c&e&e\\\end{bmatrix}=\begin{bmatrix}1\\1 \\1 \\1 \\1 \\1 \\ \end{bmatrix}[~a~~a~~c ~~c~~e ~~e ~ ]$
Example 3(a): Fill in the gaps
$\begin{bmatrix} 1&0\\1&1\\\end{bmatrix}=\begin{bmatrix}\\ \\ \end{bmatrix}[~\_~~\_~~ ]$
Example 3(b): Fill in the gaps
$\begin{bmatrix} 1&0\\1&1\\\end{bmatrix}=\begin{bmatrix}1\\1 \\ \end{bmatrix}[~1~~0~~ ]+\begin{bmatrix}0\\1 \\ \end{bmatrix}[~0~~1~~ ]$
Draw some conclusions.
Example 3(a): This 2x2 matrix cannot be factored into the product of a 2x1 matrix and a 1x2 matrix, because it has rank 2. In general, if $A$ is an $m$ x $n$ matrix of rank $r$, then a rank factorization $A=CD$ requires $C$ to be $m$ x $r$ and $D$ to be $r$ x $n$. In our case $r=2$, so both $C$ and $D$ would have to be 2x2 matrices, and the rank-one factorization asked for in Example 3(a) is impossible.
More explicitly, let
$$\begin{bmatrix}
1 & 0\\
1 & 1
\end{bmatrix}=
\begin{bmatrix}
a_1\\
a_2
\end{bmatrix}
\begin{bmatrix}
b_1 & b_2
\end{bmatrix}=
\begin{bmatrix}
a_1b_1 & a_1b_2\\
a_2b_1 & a_2b_2
\end{bmatrix},
\text{ } a_1,a_2,b_1,b_2 \in \mathbb{R}$$
Then,
$$a_1b_2=0 \implies a_1=0 \vee b_2 = 0 \implies 1=a_1b_1=0\cdot b_1=0 \ \vee\ 1=a_2b_2 = a_2\cdot 0 = 0$$
Either case results in a contradiction.
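These conclusions are easy to check numerically. Below is a small sketch using NumPy (assumed to be available; the values chosen for $a$, $c$, $e$ are sample numbers): `np.linalg.matrix_rank` confirms that the matrices in Examples 1 and 2 have rank 1, while the matrix in Example 3 has rank 2, so it is not an outer product of two vectors.

```python
import numpy as np

# Example 1: the 6x6 all-ones matrix is an outer product, so it has rank 1.
ones6 = np.ones((6, 6))
print(np.linalg.matrix_rank(ones6))                           # 1
print(np.allclose(ones6, np.outer(np.ones(6), np.ones(6))))   # True

# Example 2: columns a, a, c, c, e, e (sample values) still give rank 1.
a, c, e = 2.0, 3.0, 5.0
M = np.outer(np.ones(6), [a, a, c, c, e, e])
print(np.linalg.matrix_rank(M))                               # 1

# Example 3: this matrix has rank 2, so no rank-one factorization exists.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
print(np.linalg.matrix_rank(A))                               # 2
```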
**Section 2**
1. Find the rank-one 2 by 2 matrix $\tilde A$ closest to $A$. Use the Frobenius norm. To solve the resulting system of equations, use
https://www.wolframalpha.com/input/?i=solve+system+of+equations
Note: some of the work below belongs a few questions down, where the problem asks about decomposing $A$ in terms of its eigenvalues and eigenvectors. This first problem uses the Frobenius norm as follows:
Let $A=
\begin{bmatrix}
30 & 28\\
28 & 30
\end{bmatrix}$
We want to minimize the distance between $A$ and a rank one 2 by 2 matrix. Computations will be simplified by using the square of the norm. Then we want
$$
\min\limits_{a_1,a_2,b_1,b_2}\|A - \tilde A\|^2=
\min\limits_{a_1,a_2,b_1,b_2}\left\|A -
\begin{bmatrix}
a_1\\
a_2
\end{bmatrix}
\begin{bmatrix}
b_1 & b_2
\end{bmatrix}\right\|^2
=\min\limits_{a_1,a_2,b_1,b_2}\left\|
\begin{bmatrix}
30 & 28\\
28 & 30
\end{bmatrix}-
\begin{bmatrix}
a_1b_1 & a_1b_2\\
a_2b_1 & a_2b_2
\end{bmatrix}\right\|^2 =
\min\limits_{a_1,a_2,b_1,b_2}\left\|
\begin{bmatrix}
30 - a_1b_1 & 28 - a_1b_2\\
28 - a_2b_1 & 30 - a_2b_2
\end{bmatrix}\right\|^2
$$
Using the Frobenius Norm we get
$$
\min\limits_{a_1,a_2,b_1,b_2}|30-a_1b_1|^2+|28-a_1b_2|^2+|28-a_2b_1|^2 + |30-a_2b_2|^2=
\min\limits_{a_1,a_2,b_1,b_2}(30-a_1b_1)^2+(28-a_1b_2)^2+(28-a_2b_1)^2 + (30-a_2b_2)^2
$$
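As a quick numerical sanity check of this identity (a sketch assuming NumPy; $A$ is the matrix above), the Frobenius norm is just the 2-norm of the flattened entries:

```python
import numpy as np

A = np.array([[30.0, 28.0], [28.0, 30.0]])
# Frobenius norm computed directly vs. as the 2-norm of the flattened entries.
print(np.linalg.norm(A, 'fro'))   # about 58.03
print(np.sqrt(np.sum(A**2)))      # same value
```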
Then the problem is equivalent to minimizing $$\left\|
\begin{bmatrix}
30\\
28\\
28\\
30
\end{bmatrix} -
\begin{bmatrix}
a_1b_1\\
a_1b_2\\
a_2b_1\\
a_2b_2
\end{bmatrix}\right\|^2
$$
Note that $$
\begin{bmatrix}
a_1b_1\\
a_1b_2\\
a_2b_1\\
a_2b_2
\end{bmatrix} =
b_1
\begin{bmatrix}
a_1\\
0\\
a_2\\
0
\end{bmatrix}+
b_2
\begin{bmatrix}
0\\
a_1\\
0\\
a_2
\end{bmatrix}
$$
and let $H$ be the vector space spanned by $$\left\{
\begin{bmatrix}
a_1\\
0\\
a_2\\
0
\end{bmatrix} \text{ , }
\begin{bmatrix}
0\\
a_1\\
0\\
a_2
\end{bmatrix}
\right\}
$$
Then we want to minimize the square of the distance to the vector space $H$.
Let $$\mathbf{y}=
\begin{bmatrix}
30\\
28\\
28\\
30
\end{bmatrix} \text{ , }
\mathbf{u_1}=
\begin{bmatrix}
a_1\\
0\\
a_2\\
0
\end{bmatrix} \text{ , }
\mathbf{u_2}=
\begin{bmatrix}
0\\
a_1\\
0\\
a_2
\end{bmatrix}
$$
From previous homework, we know this is equivalent to maximizing $||\text{proj}_H\mathbf{y}||^2$. Since $\mathbf{u_1}$ and $\mathbf{u_2}$ are orthogonal,
$$\text{proj}_Hy = \frac{\mathbf{y}\cdot \mathbf{u_1}}{\mathbf{u_1} \cdot \mathbf{u_1}}\mathbf{u_1} + \frac{\mathbf{y} \cdot \mathbf{u_2}}{\mathbf{u_2} \cdot \mathbf{u_2}}\mathbf{u_2} =
\frac{30a_1 + 28a_2}{a_1^2 + a_2^2}\mathbf{u_1} + \frac{28a_1 + 30a_2}{a_1^2 + a_2^2}\mathbf{u_2}
$$
Then,
$$
||\text{proj}_Hy||^2= \frac{(30a_1 + 28a_2)^2a_1^2 + (30a_1 + 28a_2)^2a_2^2 + (28a_1 + 30a_2)^2a_1^2 + (28a_1 + 30a_2)^2a_2^2}{(a_1^2 + a_2^2)^2} =
\frac{(30a_1 + 28a_2)^2 + (28a_1 + 30a_2)^2}{a_1^2 + a_2^2} =
900 + 784 + \frac{3360\,a_1a_2}{a_1^2 + a_2^2}
$$
The constant term is fixed, so we maximize the remaining term by maximizing $a_1a_2$ while holding $a_1^2+a_2^2$ constant. Let $h(a_1,a_2) = a_1^2 + a_2^2 - c = 0$, let $f(a_1,a_2)=a_1a_2$, and let the Lagrangian be $g(a_1,a_2,\lambda)=f(a_1,a_2) - \lambda h(a_1,a_2)$.
Then,
$$\begin{align}
\frac{\partial g}{\partial a_1} = a_2 - 2\lambda a_1 = 0\\
\frac{\partial g}{\partial a_2} = a_1 - 2\lambda a_2 = 0\\
\frac{\partial g}{\partial \lambda} = -(a_1^2 + a_2^2 - c) = 0
\end{align}
$$
implies $a_1 = \pm a_2$; the maximum of $a_1a_2$ occurs when
$$
a_1=a_2
$$
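A small numerical check of this conclusion (a sketch assuming NumPy; the constraint constant is taken as $c=1$, which does not affect the direction of the maximizer): parametrize $a_1^2+a_2^2=1$ by $(a_1,a_2)=(\cos\theta,\sin\theta)$ and locate the maximum of $a_1a_2$.

```python
import numpy as np

# On the unit circle a1^2 + a2^2 = 1, find the angle maximizing a1*a2.
theta = np.linspace(0.0, 2.0 * np.pi, 100001)
a1, a2 = np.cos(theta), np.sin(theta)
best = theta[np.argmax(a1 * a2)]
print(np.cos(best), np.sin(best))   # equal (up to the grid spacing): a1 = a2 at the maximum
```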
Also,
$$
\begin{bmatrix}
a_1b_1\\
a_1b_2\\
a_2b_1\\
a_2b_2
\end{bmatrix} =
a_1
\begin{bmatrix}
b_1\\
b_2\\
0\\
0
\end{bmatrix}+
a_2
\begin{bmatrix}
0\\
0\\
b_1\\
b_2
\end{bmatrix} =
a_1
\begin{bmatrix}
b_1\\
b_2\\
b_1\\
b_2
\end{bmatrix}
$$
and let $K$ be the vector space spanned by $$\left\{
\begin{bmatrix}
b_1\\
b_2\\
b_1\\
b_2
\end{bmatrix}
\right\}
$$
Then we want to minimize the square of the distance to the vector space $K$.
Let $$\mathbf{y}=
\begin{bmatrix}
30\\
28\\
28\\
30
\end{bmatrix} \text{ , }
\mathbf{v}=
\begin{bmatrix}
b_1\\
b_2\\
b_1\\
b_2
\end{bmatrix}
$$
From previous homework, we know this is equivalent to maximizing $||\text{proj}_Ky||^2$. Then,
$$\text{proj}_Ky = \frac{\mathbf{y}\cdot \mathbf{v}}{\mathbf{v} \cdot \mathbf{v}}\mathbf{v} =
\frac{29b_1 + 29b_2}{b_1^2 + b_2^2}\mathbf{v}
$$
Then,
$$
||\text{proj}_Ky||^2= \frac{(29b_1 + 29b_2)^2b_1^2 + (29b_1 + 29b_2)^2b_2^2 + (29b_1 + 29b_2)^2b_1^2 + (29b_1 + 29b_2)^2b_2^2}{(b_1^2 + b_2^2)^2} =
\frac{2(29b_1 + 29b_2)^2}{b_1^2 + b_2^2}=
1682 + \frac{3364\,b_1b_2}{b_1^2 + b_2^2}
$$
As before, the constant term is fixed, so we maximize $b_1b_2$ while holding $b_1^2+b_2^2$ constant. Let $h(b_1,b_2) = b_1^2 + b_2^2 - c = 0$, let $f(b_1,b_2)=b_1b_2$, and let $g(b_1,b_2,\lambda)=f(b_1,b_2) - \lambda h(b_1,b_2)$.
Then,
$$\begin{align}
\frac{\partial g}{\partial b_1} = b_2 - 2\lambda b_1 = 0\\
\frac{\partial g}{\partial b_2} = b_1 - 2\lambda b_2 = 0\\
\frac{\partial g}{\partial \lambda} = -(b_1^2 + b_2^2 - c) = 0
\end{align}
$$
implies $b_1 = \pm b_2$; the maximum occurs when
$$
b_1=b_2
$$
Then,
$$
\begin{bmatrix}
a_1b_1\\
a_1b_2\\
a_2b_1\\
a_2b_2
\end{bmatrix} =
\begin{bmatrix}
a_1b_1\\
a_1b_1\\
a_1b_1\\
a_1b_1
\end{bmatrix}= a_1b_1
\begin{bmatrix}
1\\
1\\
1\\
1
\end{bmatrix}
$$
So ultimately we want to minimize the distance between $\mathbf{y}$ and the vector space spanned by
$$\mathbf{w}=
\begin{bmatrix}
1\\
1\\
1\\
1
\end{bmatrix}
$$
Then,
$$
\text{proj}_w\mathbf{y}=\frac{\mathbf{y}\cdot \mathbf{w}}{\mathbf{w} \cdot \mathbf{w}}\mathbf{w}=
\frac{116}{4}\mathbf{w}=
\begin{bmatrix}
29\\
29\\
29\\
29
\end{bmatrix}
$$
Then,
$$\tilde A =
\begin{bmatrix}
29 & 29\\
29 & 29
\end{bmatrix}
$$
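The same matrix comes out of a direct numerical computation. Here is a sketch using NumPy (an alternative to Wolfram Alpha): truncate the SVD of $A$ to its largest singular value.

```python
import numpy as np

A = np.array([[30.0, 28.0], [28.0, 30.0]])
U, s, Vt = np.linalg.svd(A)

# Keep only the largest singular value: the closest rank-one matrix in Frobenius norm.
A_tilde = s[0] * np.outer(U[:, 0], Vt[0, :])
print(A_tilde)                                   # approx. [[29. 29.] [29. 29.]]
print(np.linalg.norm(A - A_tilde, 'fro'))        # approx. 2.0
```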
3. Then compute $E=A-\tilde A$, and write $A$ as a sum of $\tilde A$ and $E$.
$$
A - \tilde A=\begin{bmatrix} 30&28\\28&30\\\end{bmatrix} - \begin{bmatrix} 29&29\\29&29\\\end{bmatrix} = \begin{bmatrix} 1&-1\\-1&1\\\end{bmatrix} = E
$$
$$
A = \begin{bmatrix} 30&28\\28&30\\\end{bmatrix} = \begin{bmatrix} 29&29\\29&29\\\end{bmatrix} + \begin{bmatrix} 1&-1\\-1&1\\\end{bmatrix} = \tilde A + E
$$
2. How are $\tilde A$ and $E$ related to the eigenvalues and eigenvectors of $A$? Try to compute $\tilde A$ using eigenvalues and eigenvectors. This is a famous kind of decomposition; what is it called?
Can this decomposition be obtained for any matrix?
E.g. Example 3.
$$
\sigma_{1} \mathbf{u_{1}} \mathbf{v_{1}^{T}} = 58\begin{bmatrix}
1/\sqrt{2} \\
1/\sqrt{2}
\end{bmatrix}
\begin{bmatrix}
1/\sqrt{2} & 1/\sqrt{2}
\end{bmatrix} = \begin{bmatrix}
29 & 29 \\
29 & 29
\end{bmatrix} = \tilde A
$$
Singular value decomposition of $\tilde A$ produces
$$
\tilde A = \begin{bmatrix}
1/\sqrt{2} & -1/\sqrt{2} \\
1/\sqrt{2} & 1/\sqrt{2}
\end{bmatrix} \begin{bmatrix}
58 & 0 \\
0 & 0
\end{bmatrix} \begin{bmatrix}
1/\sqrt{2} & 1/\sqrt{2} \\
-1/\sqrt{2} & 1/\sqrt{2}
\end{bmatrix}
$$
The outer matrices are composed of the unit eigenvectors of $\tilde A$, which are the same as those of $A$ (the right-hand factor is the transpose, so the eigenvectors appear as its rows). The inner, diagonal matrix contains the eigenvalues of $\tilde A$.
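A brief numerical confirmation of this factorization (a sketch assuming NumPy): since $\tilde A$ is symmetric, the right-hand factor is just the transpose of the left-hand one, and multiplying the three written matrices reproduces $\tilde A$.

```python
import numpy as np

s2 = 1.0 / np.sqrt(2.0)
U = np.array([[s2, -s2], [s2, s2]])   # columns: unit eigenvectors of A (and of A_tilde)
S = np.diag([58.0, 0.0])              # eigenvalues of A_tilde on the diagonal
print(U @ S @ U.T)                    # [[29. 29.] [29. 29.]] = A_tilde
```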
Singular value decomposition of $E$ produces
$$
E = \begin{bmatrix}
-1/\sqrt{2} & 1/\sqrt{2} \\
1/\sqrt{2} & 1/\sqrt{2}
\end{bmatrix} \begin{bmatrix}
2 & 0 \\
0 & 0
\end{bmatrix} \begin{bmatrix}
-1/\sqrt{2} & 1/\sqrt{2} \\
1/\sqrt{2} & 1/\sqrt{2}
\end{bmatrix}
$$
Further,
$$
E = \begin{bmatrix} 1&-1\\
-1&1
\end{bmatrix} = \begin{bmatrix} 1 \\
-1
\end{bmatrix} \begin{bmatrix} 1 & -1 \\
\end{bmatrix}
$$
In the decomposition of $E$, the order of the unit eigenvectors in the outer matrices is reversed, and the inner, diagonal matrix now contains the second, smaller eigenvalue of $A$.
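Numerically, the two rank-one pieces reassemble $A$ exactly (a sketch assuming NumPy): the larger eigenvalue/eigenvector pair gives $\tilde A$ and the smaller one gives $E$.

```python
import numpy as np

A = np.array([[30.0, 28.0], [28.0, 30.0]])
evals, evecs = np.linalg.eigh(A)        # eigenvalues 2 and 58, in ascending order

A_tilde = evals[1] * np.outer(evecs[:, 1], evecs[:, 1])   # 58 * u1 u1^T
E       = evals[0] * np.outer(evecs[:, 0], evecs[:, 0])   #  2 * u2 u2^T
print(E)                                # approx. [[ 1. -1.] [-1.  1.]]
print(np.allclose(A, A_tilde + E))      # True
```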
**Section 3**
Use Wolfram Alpha for all computations.
1. Find the rank-one 2 by 2 matrix $\tilde A$ closest to $A$. Use the Frobenius norm.
$A=\begin{bmatrix} 1&0\\1&1\\\end{bmatrix}$
For reference, the Frobenius norm of $A$ is $\|A\|_\text{F} = \sqrt{\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2} = \sqrt{3}$.
The closest rank-one matrix keeps only the largest singular value $\sigma_1 = \frac{1+\sqrt{5}}{2}$ of $A$ in the middle factor (note that $\sigma_1$, not $\|A\|_\text{F}$, is the largest singular value) and replaces the other singular value by $0$:
$$
\tilde A =
\begin{bmatrix} \frac{(\sqrt{5}+5)\sqrt{5-2\sqrt{5}}}{10} & \frac{(\sqrt{5}-5)\sqrt{2\sqrt{5}+5}}{10} \\ \frac{(3\sqrt{5}+5)\sqrt{5-2\sqrt{5}}}{10} & \frac{\sqrt{2\sqrt{5}+5}(3\sqrt{5}-5)}{10} \end{bmatrix}
\begin{bmatrix} \frac{1+\sqrt{5}}{2} & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} \frac{(\sqrt{5}+5)\sqrt{-2(\sqrt5-5)}}{20} & \frac{(\sqrt{5}-5)\sqrt{2(\sqrt5+5)}}{20} \\ \frac{\sqrt{-10(\sqrt{5}-5)}}{10} & \frac{\sqrt{10(\sqrt{5}+5)}}{10} \end{bmatrix}^T =
\begin{bmatrix} \frac{\sqrt{5}+5}{10} & \frac{\sqrt{5}}{5} \\ \frac{3\sqrt{5}+5}{10} & \frac{\sqrt{5}+5}{10} \end{bmatrix}
$$
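The exact entries above can be confirmed numerically (a sketch assuming NumPy): truncating the SVD of $A$ to its largest singular value $\sigma_1=(1+\sqrt5)/2$ reproduces $\tilde A$.

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0]])
U, s, Vt = np.linalg.svd(A)
print(s[0], (1.0 + np.sqrt(5.0)) / 2.0)       # both about 1.618 (the golden ratio)

A_tilde = s[0] * np.outer(U[:, 0], Vt[0, :])
print(A_tilde)   # approx. [[0.7236 0.4472] [1.1708 0.7236]], matching the exact entries above
```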
2. Can you compute $\tilde A$ and $E$ using eigenvectors and eigenvalues of $A$?
2. Compute the eigenvalues and the unit eigenvectors of $AA^T$ and $A^TA$. What do you notice about the eigenvalues of $A^TA$ and $AA^T$?
The eigenvalues for $A^TA$ and $AA^T$ are:
$$\lambda_1 = \frac{3+\sqrt5}{2}, \lambda_2 = \frac{3-\sqrt5}{2}$$
$A^TA$ and $AA^T$ have the same eigenvalues.
The unit eigenvectors for $A^TA$ are:
$$v_1 = \begin{bmatrix}\frac{1+\sqrt{5}}{2\sqrt{1+\frac{1}{4}(1+\sqrt{5})^2}} \\ \frac{1}{\sqrt{1+\frac{1}{4}(1+\sqrt{5})^2}}\end{bmatrix}, v_2 = \begin{bmatrix}\frac{1-\sqrt{5}}{2\sqrt{1+\frac{1}{4}(1-\sqrt{5})^2}} \\ \frac{1}{\sqrt{1+\frac{1}{4}(1-\sqrt{5})^2}}\end{bmatrix}$$
The unit eigenvectors for $AA^T$ are:
$$u_1 = \begin{bmatrix}\frac{-1+\sqrt{5}}{2\sqrt{1+\frac{1}{4}(-1+\sqrt{5})^2}} \\ \frac{1}{\sqrt{1+\frac{1}{4}(-1+\sqrt{5})^2}}\end{bmatrix}, u_2 = \begin{bmatrix}\frac{-1-\sqrt{5}}{2\sqrt{1+\frac{1}{4}(-1-\sqrt{5})^2}} \\ \frac{1}{\sqrt{1+\frac{1}{4}(-1-\sqrt{5})^2}}\end{bmatrix}$$
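A quick numerical cross-check of these shared eigenvalues (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0]])
print(np.linalg.eigvalsh(A.T @ A))   # [0.3819..., 2.6180...] = (3 -/+ sqrt(5))/2
print(np.linalg.eigvalsh(A @ A.T))   # the same eigenvalues
```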
3. Let $\lambda_1$ be the largest eigenvalue, let ${\bf u_1}$ be the unit eigenvector of $AA^T$ corresponding to $\lambda_1$, and let ${\bf v_1}$ be the unit eigenvector of $A^TA$ corresponding to $\lambda_1$. Let $\sigma_1$ be the square root of $\lambda_1$. Compute
$$A_1=\sigma_1{\bf u_1}{\bf v_1}^T$$
$$= \frac{\sqrt{5}+1}{2}\begin{bmatrix}\frac{-1+\sqrt{5}}{2\sqrt{1+\frac{1}{4}(-1+\sqrt{5})^2}} \\ \frac{1}{\sqrt{1+\frac{1}{4}(-1+\sqrt{5})^2}}\end{bmatrix}
\begin{bmatrix}\frac{1+\sqrt{5}}{2\sqrt{1+\frac{1}{4}(1+\sqrt{5})^2}} & \frac{1}{\sqrt{1+\frac{1}{4}(1+\sqrt{5})^2}}\end{bmatrix} = \begin{bmatrix}
\frac{\sqrt{5}+5}{10} & \frac{\sqrt{5}}{5}\\ \frac{3\sqrt{5}+5}{10} & \frac{\sqrt{5}+5}{10}
\end{bmatrix}
$$
4. Compare $\tilde A$ and $A_1$
5. Prove that for any non-zero vectors ${\bf u}$ and ${\bf v}$, the matrix ${\bf u}{\bf v}^T$ has rank 1.
Note that if $\text{Col}(\mathbf{u}\mathbf{v}^T) = \text{span}\{\mathbf{u}\}$ and $\mathbf{u} \neq \mathbf{0}$, then $\{\mathbf{u}\}$ is a basis for $\text{Col}(\mathbf{u}\mathbf{v}^T)$, so $\dim\text{Col}(\mathbf{u}\mathbf{v}^T)=1$, i.e., $\mathbf{u}\mathbf{v}^T$ has rank 1. It therefore suffices to show that $\text{Col}(\mathbf{u}\mathbf{v}^T) = \text{span}\{\mathbf{u}\}$.
$Proof.$ Let $\mathbf{u} \in \mathbb{R}^m$ and $\mathbf{v} \in \mathbb{R}^n$ be non-zero, and let $\mathbf{x} \in \text{Col}(\mathbf{u}\mathbf{v}^T)$. The columns of $\mathbf{u}\mathbf{v}^T$ are $v_1\mathbf{u}, v_2\mathbf{u}, \ldots, v_n\mathbf{u}$, so $\mathbf{x}= c_1v_1\mathbf{u} + c_2v_2\mathbf{u} + \cdots + c_nv_n\mathbf{u} = (c_1v_1 + c_2v_2 + \cdots + c_nv_n)\mathbf{u}$ for some $c_i \in \mathbb{R}$, $1 \leq i \leq n$. Then $\mathbf{x} \in \text{span}\{\mathbf{u}\}$, so $\text{Col}(\mathbf{u}\mathbf{v}^T) \subseteq \text{span}\{\mathbf{u}\}$.
Conversely, let $\mathbf{x} \in \text{span}\{\mathbf{u}\}$, so $\mathbf{x}=\alpha\mathbf{u}$ for some $\alpha \in \mathbb{R}$. Since $\mathbf{v} \neq \mathbf{0}$, there is an index $i$ with $v_i \neq 0$. The $i$-th column of $\mathbf{u}\mathbf{v}^T$ is $v_i\mathbf{u}$, so taking the weight $\alpha v_i^{-1}$ on that column and $0$ on all the others gives $\mathbf{x} = \alpha v_i^{-1}(v_i\mathbf{u}) \in \text{Col}(\mathbf{u}\mathbf{v}^T)$. Then $\text{span}\{\mathbf{u}\} \subseteq \text{Col}(\mathbf{u}\mathbf{v}^T)$, and therefore $\text{Col}(\mathbf{u}\mathbf{v}^T) = \text{span}\{\mathbf{u}\}$. By the argument preceding the proof, for any non-zero vectors $\mathbf{u}$ and $\mathbf{v}$, the matrix $\mathbf{u}\mathbf{v}^T$ has rank 1. $\square$
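The statement is also easy to spot-check numerically (a sketch assuming NumPy, using random non-zero vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(5)        # non-zero with probability 1
v = rng.standard_normal(7)
M = np.outer(u, v)                # 5x7 outer product u v^T
print(np.linalg.matrix_rank(M))   # 1
```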
6. Suppose you have an $m\times n$ matrix $M$. Suggest a linear algebra procedure for obtaining the matrix $M_1$, which is the closest rank 1 matrix to $M$ in Frobenius norm.
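One possible procedure, following what was done for the 2 by 2 cases above (a sketch assuming NumPy; `closest_rank_one` is an illustrative name, not a library function): compute the SVD of $M$ and keep only the largest singular value together with its left and right singular vectors.

```python
import numpy as np

def closest_rank_one(M):
    """Return the rank-one matrix closest to M in Frobenius norm.

    Computes the SVD M = U diag(s) V^T and keeps only the largest
    singular value s[0] with its left/right singular vectors.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return s[0] * np.outer(U[:, 0], Vt[0, :])

# Example: recovers the answers found above for the 2x2 matrices.
print(closest_rank_one(np.array([[30.0, 28.0], [28.0, 30.0]])))  # approx. [[29. 29.] [29. 29.]]
print(closest_rank_one(np.array([[1.0, 0.0], [1.0, 1.0]])))      # approx. [[0.7236 0.4472] [1.1708 0.7236]]
```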