---
title: Team 4
---
Team:
Bryten
Seth
Larry
***Score 27/30***
The four points are $v_1=(1,2), v_2=(2,1), v_3=(3,4), v_4=(4,3)$. They are in 2D. We want to reduce dimensionality to one. The result should be four points in $\mathbb {R}^1$: $p_1$, $p_2$, $p_3$, $p_4$.
**Predict the answer**
1. By visual inspection find a hyperplane $H$ that the four points are "closest to."
Answer: The line $y=x$ is a hyperplane (in 2D, a line) that passes symmetrically between the four points.
2. Find projections of the four points on $H$.
Answer: Projections of $v$ onto $y=x$ are calculated using the direction vector $u = \begin{bmatrix} 1\\ 1\\ \end{bmatrix}$ (note: $u$ is not a unit vector, but the formula below normalizes by $u \cdot u$),
where the projections are calculated using the formula $\frac{v \cdot u}{u \cdot u} \cdot u$
So, we have:
for $v_1$: $\frac{v_1 \cdot u}{u \cdot u}\, u = \frac{3}{2}\begin{bmatrix} 1\\ 1 \end{bmatrix} = \begin{bmatrix} 3/2\\ 3/2 \end{bmatrix}$
for $v_2$: $\frac{v_2 \cdot u}{u \cdot u}\, u = \frac{3}{2}\begin{bmatrix} 1\\ 1 \end{bmatrix} = \begin{bmatrix} 3/2\\ 3/2 \end{bmatrix}$
for $v_3$: $\frac{v_3 \cdot u}{u \cdot u}\, u = \frac{7}{2}\begin{bmatrix} 1\\ 1 \end{bmatrix} = \begin{bmatrix} 7/2\\ 7/2 \end{bmatrix}$
for $v_4$: $\frac{v_4 \cdot u}{u \cdot u}\, u = \frac{7}{2}\begin{bmatrix} 1\\ 1 \end{bmatrix} = \begin{bmatrix} 7/2\\ 7/2 \end{bmatrix}$
So the projected points are $(3/2, 3/2)$ (for $v_1, v_2$) and $(7/2, 7/2)$ (for $v_3, v_4$).
3. Find $p_1$, $p_2$, $p_3$, $p_4$.
Answer:
$p_1 = p_2 = \sqrt{(3/2)^2 + (3/2)^2} = \sqrt{18/4} = \frac{3}{\sqrt{2}}$
$p_3 = p_4 = \sqrt{(7/2)^2 + (7/2)^2} = \sqrt{98/4} = \frac{7}{\sqrt{2}}$
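The projection arithmetic above can be checked numerically; a minimal sketch with NumPy (the array names here are ours, not part of the assignment):

```python
import numpy as np

# The four 2D points and a direction vector for the line y = x
V = np.array([[1, 2], [2, 1], [3, 4], [4, 3]], dtype=float)
u = np.array([1.0, 1.0])

# Projection of each v onto u: ((v . u) / (u . u)) * u
coeffs = V @ u / (u @ u)              # 3/2, 3/2, 7/2, 7/2
projections = np.outer(coeffs, u)     # rows (3/2,3/2), (3/2,3/2), (7/2,7/2), (7/2,7/2)

# 1D coordinates p_i: distance of each projected point from the origin
p = np.linalg.norm(projections, axis=1)   # 3/sqrt(2), 3/sqrt(2), 7/sqrt(2), 7/sqrt(2)
print(projections)
print(p)
```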
**Mathematize your approach**
1. $H$ is uniquely defined by a vector ${\bf x}:=(x_1,x_2)$ that we need to find. Let us assume that ${\bf x}$ has unit length. Set up a minimization problem that uses $v_1,v_2,v_3,v_4$ as knowns and ${\bf x}$ as an unknown. This should reflect the fact that $H$ is the hyperplane that the four points are "closest to." You should be able to recognize the mathematical objects that we have studied in this course. Name them. You will end up with
$$\min_{x_1,x_2} F(x_1, x_2)$$
subject to the constraint $x_1^2+x_2^2=1$
Answer: We need to minimize the sum of squared distances from the points to their projections:
$$\min_x \sum_{i=1}^{4} (v_i - (v_i \cdot x)x) \cdot (v_i - (v_i \cdot x)x)$$
Here the $v_i \cdot x$ are dot products and the $(v_i \cdot x)x$ are orthogonal projections, objects studied in this course.
We know that $x \cdot x = 1$.
Then, we have:
$$\min_x \sum_{i=1}^{4} \left(v_i \cdot v_i - 2(v_i \cdot x)^2 + (v_i \cdot x)^2 (x \cdot x)\right) = \min_x \sum_{i=1}^{4} \left(v_i \cdot v_i - (v_i \cdot x)^2\right)$$
2. Expand $F(x_1, x_2)$ and rewrite your minimization problem as a maximization one
$$\max_{x_1,x_2} G(x_1, x_2)$$
subject to the constraint $x_1^2+x_2^2=1$
Answer:
Write $v_i = (v_i \cdot x)x + y_i$, where $y_i := v_i - (v_i \cdot x)x$ is the residual. Since $\sum_{i} v_i \cdot v_i$ does not depend on $x$, minimizing $F$ is equivalent to maximizing $\sum_{i=1}^{4} \|\mathrm{proj}_x v_i\|^2$:
$$\max_{x_1,x_2} G(x_1, x_2) = \max_x \sum_{i=1}^{4} (v_i \cdot x)^2$$
***Why is maximizing this function equivalent to minimizing the function $F$?***
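$F$ and $G$ sum to the constant $\sum_i v_i \cdot v_i$ for every unit vector $x$, so maximizing $G$ is the same as minimizing $F$. A quick numerical check on the toy points (a sketch; the names are ours):

```python
import numpy as np

# For every unit vector x, F(x) + G(x) equals the constant sum |v_i|^2
V = np.array([[1, 2], [2, 1], [3, 4], [4, 3]], dtype=float)
const = np.sum(V * V)                 # 5 + 5 + 25 + 25 = 60

sums = []
for theta in np.linspace(0, 2 * np.pi, 37):
    x = np.array([np.cos(theta), np.sin(theta)])       # a unit vector
    G = np.sum((V @ x) ** 2)                           # sum (v_i . x)^2
    F = np.sum((V - np.outer(V @ x, x)) ** 2)          # sum |v_i - (v_i . x)x|^2
    sums.append(F + G)
print(const, sums[:3])
```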
3. Recognize $G(x_1, x_2)$ as an object that we have studied in this course. You might want to explicitly write out $G(x_1, x_2)$
Answer: The object is a quadratic form; its maximum over the unit circle is attained at an eigenvector of the associated matrix.
***Correct answer: quadratic form***
4. Write $G(x_1, x_2)$ in a vector-matrix form using ${\bf x}$ and the matrix $M$ that has $v_i$, $i=1,2,3,4$ as its rows.
$$ \sum_{i=1}^{4}(v_i \cdot x)^2 = x^T A x, \quad \text{where } A = M^T M \\
= \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 30 & 28 \\ 28 & 30 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \\
= x_1(30x_1 + 28x_2) + x_2(28x_1 + 30x_2) \\
= 30x_1^2 + 56x_1x_2 + 30x_2^2$$
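A short check that the matrix form and the expanded polynomial agree (the test point $(0.6, 0.8)$ is an arbitrary choice of ours):

```python
import numpy as np

# Build A = M^T M for the toy data and compare x^T A x with the polynomial
M = np.array([[1, 2], [2, 1], [3, 4], [4, 3]], dtype=float)
A = M.T @ M                                   # [[30, 28], [28, 30]]

x1, x2 = 0.6, 0.8                             # any point; this one is a unit vector
x = np.array([x1, x2])
G_matrix = x @ A @ x
G_poly = 30 * x1**2 + 56 * x1 * x2 + 30 * x2**2
print(A, G_matrix, G_poly)
```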
5. Write the constraint $x_1^2+x_2^2=1$ as a product of two vectors.
$$ x^Tx -1 = 0 $$
6. Use Lagrange multipliers to solve the maximization problem. Google how to differentiate $G(x_1, x_2)$.
$$ \mathcal{L}(x, \lambda) = x^TAx - \lambda(x^Tx - 1) $$
$$ \frac{\partial \mathcal{L}}{\partial x_1} = 0, \quad \frac{\partial \mathcal{L}}{\partial x_2} = 0, \quad \frac{\partial \mathcal{L}}{\partial \lambda} = 0 $$
$$ 2x^TA - 2 \lambda x^T = 0 \Rightarrow x^TA = \lambda x^T \Rightarrow Ax = \lambda x \quad (\text{using } A^T = A) $$
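The eigenvalue condition can be verified directly on the toy data: $w = (1,1)/\sqrt{2}$ is an eigenvector of $A$ with $\lambda = 58$, and $G$ attains its maximum value $58$ there (a sketch; the circle sampling is ours):

```python
import numpy as np

# Verify the eigenpair and that it maximizes G on the unit circle (toy data)
A = np.array([[30.0, 28.0], [28.0, 30.0]])
w = np.array([1.0, 1.0]) / np.sqrt(2)         # candidate maximizer

Aw = A @ w                                    # should equal 58 * w
G_at_w = w @ A @ w                            # should equal the eigenvalue 58

# Sample G over the whole unit circle
thetas = np.linspace(0, 2 * np.pi, 721)
G_vals = [np.array([np.cos(t), np.sin(t)]) @ A @ np.array([np.cos(t), np.sin(t)])
          for t in thetas]
print(G_at_w, max(G_vals))
```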
7. Recognize your solutions as a mathematical object heavily studied in this course.
The maximizer is the unit eigenvector $e_1$ of $A$ corresponding to its largest eigenvalue; the maximum value of $G$ is that eigenvalue.
8. Your solution will produce the desired ${\bf x}$. Find a simple matrix multiplication way of obtaining $p_1$, $p_2$, $p_3$, $p_4$.
Multiplying $M$ by the unit eigenvector corresponding to the principal eigenvalue will yield the desired points in $\mathbb{R}^1$. Therefore, let $\mathbf{w}$ be this unit eigenvector. Then $M \mathbf{w}$ will yield the points. In the Toy Case, we have
$$
M \mathbf{w} = \begin{bmatrix}
1 & 2\\
2 & 1\\
3 & 4\\
4 & 3
\end{bmatrix} \begin{bmatrix}
1/\sqrt{2} \\
1/\sqrt{2}
\end{bmatrix} = \begin{bmatrix}
3/\sqrt{2}\\
3/\sqrt{2}\\
7/\sqrt{2}\\
7/\sqrt{2}
\end{bmatrix}
$$
Finally
$$ p_1 = \frac{3}{\sqrt{2}}, p_2 = \frac{3}{\sqrt{2}}, p_3 = \frac{7}{\sqrt{2}}, p_4 = \frac{7}{\sqrt{2}}$$
**Code - Toy Case:**
```python=
import numpy as np
from numpy import linalg as LA
# TOY CASE
# Matrix M
M = np.array([[1, 2],
[2, 1],
[3, 4],
[4, 3]])
# Matrix A (M transpose times M)
A = M.T @ M
# Eigenvalues and Eigenvectors of A
eigVl, eigVc = LA.eig(A)
# w: unit eigenvector for the largest eigenvalue (eig does not sort its output)
w = eigVc[:, np.argmax(eigVl)]
w = np.reshape(w, (-1,1))
# M times w (already a column vector)
Mw = M @ w
# Print results
print('TOY CASE: \n')
print('M =', '\n\n', M, '\n')
print('A =', '\n\n', A, '\n')
print('Eigenvalues =', '\n\n', eigVl, '\n')
print('Eigenvectors =', '\n\n', eigVc, '\n')
print('w =', '\n\n', w, '\n')
print('Mw (M Dimensionally Reduced) =', '\n\n', Mw, '\n')
```
**Output - Toy Case:**
```
TOY CASE:
M =
[[1 2]
[2 1]
[3 4]
[4 3]]
A =
[[30 28]
[28 30]]
Eigenvalues =
[58. 2.]
Eigenvectors =
[[ 0.70710678 -0.70710678]
[ 0.70710678 0.70710678]]
w =
[[0.70710678]
[0.70710678]]
Mw (M Dimensionally Reduced) =
[[2.12132034]
[2.12132034]
[4.94974747]
[4.94974747]]
```
The rotation (in red) is obtained by multiplying $M$ by the eigenvector matrix, as described in step 8 above; the first coordinate of each rotated point is the corresponding entry of $M \mathbf{w}$.
**Code - Toy Case Rotation:**
```python=
import matplotlib.pyplot as plt
# Execute the transformation M @ eigVc and plot both sets of data
# Variables from the Toy Case code above are reused here
rotated_data = M @ eigVc
plt.scatter(rotated_data[:,0], rotated_data[:,1], color = 'red')
plt.scatter(M[:,0],M[:,1], color = 'blue')
plt.show()
```

---
## New Set of Data
**1.**
$\min_{x_1,x_2} F(x_1, x_2) = \min_x \sum_{i=1}^{10} (v_i - (v_i \cdot x)x) \cdot (v_i - (v_i \cdot x)x)$.
We know that $x \cdot x = 1$.
Then, we have:
$$\min_x \sum_{i=1}^{10} \left(v_i \cdot v_i - 2(v_i \cdot x)^2 + (v_i \cdot x)^2 (x \cdot x)\right) = \min_x \sum_{i=1}^{10} \left(v_i \cdot v_i - (v_i \cdot x)^2\right)$$
**2.**
$\max_{x_1,x_2} G(x_1, x_2) = \max_x \sum_{i=1}^{10} (v_i \cdot x)^2$
**3.**
The object is a quadratic form.
**4.**
$\sum_{i=1}^{10}(v_i \cdot x)^2 = x^T A x, \text{ where } A = M^T M$
$=\begin{bmatrix}x_1 & x_2\end{bmatrix} \begin{bmatrix}517.44 & -16.96\\-16.96 & 4.69\end{bmatrix} \begin{bmatrix}x_1 \\ x_2\end{bmatrix}$
$= x_1(517.44x_1 - 16.96x_2) + x_2(-16.96x_1 + 4.69x_2)$
$= 517.44x_1^2 - 33.92x_1x_2 + 4.69x_2^2$
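The entries of $A$ above can be confirmed directly from the data (a quick check; variable names are ours):

```python
import numpy as np

# Recompute A = M^T M for the new data set and compare with the quoted entries
M = np.array([[4.1, -0.7], [-5.7, -0.5], [1.4, 0.9], [-13.4, 0.1],
              [-7.6, 0.4], [-8.2, 0.8], [5.2, 0.2], [-0.1, 0.6],
              [-11.6, 0.7], [-0.1, 1.2]])
A = M.T @ M
print(A)    # approximately [[517.44, -16.96], [-16.96, 4.69]]
```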
**5.**
$x^Tx -1 = 0$
**6.**
$\mathcal{L}(x, \lambda) = x^TAx - \lambda(x^Tx - 1)$
$\frac{\partial \mathcal{L}}{\partial x_1} = 0, \quad \frac{\partial \mathcal{L}}{\partial x_2} = 0, \quad \frac{\partial \mathcal{L}}{\partial \lambda} = 0$
$2x^TA - 2 \lambda x^T = 0 \Rightarrow x^TA = \lambda x^T \Rightarrow Ax = \lambda x$ (using $A^T = A$)
**7.**
The maximizer is the unit eigenvector $e_1$ of $A$ corresponding to its largest eigenvalue; the maximum value of $G$ is that eigenvalue.
**8.**
Multiplying $M$ by the unit eigenvector corresponding to the principal eigenvalue will yield the desired points in $\mathbb{R}^1$. Therefore, let $\mathbf{w}$ be this unit eigenvector. Then $M \mathbf{w}$ will yield the points. The code output below yields the dimensionality reduction:
$$
M \mathbf{w} = \begin{bmatrix}
4.1208796 \\
-5.68038007 \\
1.36951628 \\
-13.39599403\\
-7.60906401 \\
-8.22194575 \\
5.19055949 \\
-0.11975891 \\
-11.61678918 \\
-0.13957236
\end{bmatrix}
$$
**Code - New Set of Data:**
```python=
import numpy as np
from numpy import linalg as LA
# NEW SET OF DATA
# Matrix M
M = np.array([[4.1, -0.7],
[-5.7, -0.5],
[1.4, 0.9],
[-13.4, 0.1],
[-7.6, 0.4],
[-8.2, 0.8],
[5.2, 0.2],
[-0.1, 0.6],
[-11.6, 0.7],
[-0.1, 1.2]])
# Matrix A (M transpose times M)
A = M.T @ M
# Eigenvalues and Eigenvectors of A
eigVl, eigVc = LA.eig(A)
# w: unit eigenvector for the largest eigenvalue (eig does not sort its output)
w = eigVc[:, np.argmax(eigVl)]
w = np.reshape(w, (-1,1))
# M times w (already a column vector)
Mw = M @ w
# Print results
print('NEW SET OF DATA: \n')
print('M =', '\n\n', M, '\n')
print('A =', '\n\n', A, '\n')
print('Eigenvalues =', '\n\n', eigVl, '\n')
print('Eigenvectors =', '\n\n', eigVc, '\n')
print('w =', '\n\n', w, '\n')
print('Mw (M Dimensionally Reduced) =', '\n\n', Mw, '\n')
```
**Output - New Set of Data:**
```
NEW SET OF DATA:
M =
[[ 4.1 -0.7]
[ -5.7 -0.5]
[ 1.4 0.9]
[-13.4 0.1]
[ -7.6 0.4]
[ -8.2 0.8]
[ 5.2 0.2]
[ -0.1 0.6]
[-11.6 0.7]
[ -0.1 1.2]]
A =
[[517.44 -16.96]
[-16.96 4.69]]
Eigenvalues =
[518.00036585 4.12963415]
Eigenvectors =
[[ 0.99945461 0.03302242]
[-0.03302242 0.99945461]]
w =
[[ 0.99945461]
[-0.03302242]]
Mw (M Dimensionally Reduced) =
[[ 4.1208796 ]
[ -5.68038007]
[ 1.36951628]
[-13.39599403]
[ -7.60906401]
[ -8.22194575]
[ 5.19055949]
[ -0.11975891]
[-11.61678918]
[ -0.13957236]]
```
**Rotation - data_3.pdf:**
The transformation shown in the data_3.pdf file (strictly a projection onto the first coordinate axis, not a rotation) is accomplished by multiplying the matrix $M$ by
$$ R = \begin{bmatrix}
1 & 0\\
0 & 0
\end{bmatrix}
$$
This transformation preserves the first column of the resulting matrix and sets the second column to $0$, and we have
$$
MR = \begin{bmatrix}
4.1 & 0\\
-5.7 & 0\\
1.4 & 0\\
-13.4 & 0\\
-7.6 & 0\\
-8.2 & 0\\
5.2 & 0\\
-0.1 & 0\\
-11.6 & 0\\
-0.1 & 0
\end{bmatrix}
$$
**Code - data_3.pdf rotation:**
```python=
import math
import numpy as np
import matplotlib.pyplot as plt
M = np.array([[4.1, -0.7],
[-5.7, -0.5],
[1.4, 0.9],
[-13.4, 0.1],
[-7.6, 0.4],
[-8.2, 0.8],
[5.2, 0.2],
[-0.1, 0.6],
[-11.6, 0.7],
[-0.1, 1.2]])
rot_flat = np.array([[1, 0],[0,0]])
M_flat = np.matmul(M, rot_flat)
plt.scatter(M[:,0],M[:,1], color = 'blue')
plt.scatter(M_flat[:,0], M_flat[:,1], color = 'red')
plt.xlim(-15, 10)
plt.ylim(-2, 2)
plt.show()
```

**Rotation: data_1.pdf and data_2.pdf**
The set of points described by $M$ is rotated counterclockwise by 60 degrees, or $\pi/3$ radians. The matrix performing this transformation (acting on the row vectors of $M$ from the right) is
$$
R = \begin{bmatrix}
\cos(\pi/3) & \sin(\pi/3) \\
-\sin(\pi/3) & \cos(\pi/3)
\end{bmatrix}
$$
Therefore, $MR$ yields the rotation we're looking for.
**Code - data_1.pdf and data_2.pdf rotation:**
```python=
rot_third = np.array([[math.cos(math.pi/3), math.sin(math.pi/3)],[-math.sin(math.pi/3), math.cos(math.pi/3) ]])
M_third = np.matmul(M, rot_third)
plt.scatter(M_third[:,0],M_third[:,1], color = 'red')
plt.scatter(M[:,0], M[:,1], color = 'blue')
plt.xlim(-15, 10)
plt.ylim(-15, 10)
plt.show()
```
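As a sanity check on the rotation, $R$ is an orthogonal matrix ($R^T R = I$), so the transformation preserves lengths and pairwise distances; a short verification (the test vector is an arbitrary choice of ours):

```python
import numpy as np

# The 60-degree rotation matrix used above
theta = np.pi / 3
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

identity_check = R.T @ R                      # identity, up to floating point
v = np.array([3.0, 4.0])                      # arbitrary test point (a row vector)
length_before = np.linalg.norm(v)             # 5.0
length_after = np.linalg.norm(v @ R)          # rotation preserves length
print(identity_check, length_before, length_after)
```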
