---
title: Team 1
---
***Score: 27/30***
The four points are $v_1=(1,2), v_2=(2,1), v_3=(3,4), v_4=(4,3)$. They are in 2D. We want to reduce dimensionality to one. The result should be four points in $\mathbb {R}^1$: $p_1$, $p_2$, $p_3$, $p_4$.
**Predict the answer**
1. By visual inspection find a hyperplane $H$ that the four points are "closest to."
The line $y=x$.
2. Find projections of the four points on $H$.
$\operatorname{proj}_H v_1 = \operatorname{proj}_H v_2 = (1.5, 1.5)$ by the midpoint formula
$\operatorname{proj}_H v_3 = \operatorname{proj}_H v_4 = (3.5, 3.5)$ by the midpoint formula
3. Find $p_1$, $p_2$, $p_3$, $p_4$.
$p_1 = p_2 = 1.5\sqrt{2}$ by the Pythagorean theorem
$p_3 = p_4 = 3.5\sqrt{2}$ by the Pythagorean theorem
**Mathematize your approach**
1. $H$ is uniquely defined by a vector ${\bf x}:=(x_1,x_2)$ that we need to find. Let us assume that ${\bf x}$ has unit length. Set up a minimization problem that uses $v_1,v_2,v_3,v_4$ as knowns and ${\bf x}$ as an unknown. This should reflect the fact that $H$ is the hyperplane that the four points are "closest to." You should be able to recognize the mathematical objects that we have studied in this course. Name them. You will end up with
$$\min_{x_1,x_2} F(x_1, x_2)$$
subject to the constraint $x_1^2+x_2^2=1$
where
$$F(x_1, x_2) = \sum_{i=1}^{4} \|v_i - (v_i\cdot {\bf x})\,{\bf x}\|^2$$
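As a sanity check on the prediction above, here is a short NumPy sketch (Python is an assumption; the course materials may use a different tool) that evaluates $F$ for the unit vector ${\bf x} = (\cos\theta, \sin\theta)$ at a few angles; the minimum among them is at $\theta = 45°$, i.e. the direction of the line $y=x$:

```python
import numpy as np

# The four data points as rows of a matrix.
V = np.array([[1, 2], [2, 1], [3, 4], [4, 3]], dtype=float)

def F(theta):
    """Sum of squared distances from the points to the line
    spanned by the unit vector x = (cos(theta), sin(theta))."""
    x = np.array([np.cos(theta), np.sin(theta)])
    residuals = V - np.outer(V @ x, x)   # v_i - (v_i . x) x for each i
    return np.sum(residuals ** 2)

# F should be smallest at 45 degrees (the line y = x).
for deg in (0, 30, 45, 60, 90):
    print(deg, round(F(np.radians(deg)), 4))
```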
2. Expand $F(x_1, x_2)$ and rewrite your minimization problem as a maximization one
$$\max_{x_1,x_2} G(x_1, x_2)$$
subject to the constraint $x_1^2+x_2^2=1$
$$G(x_1, x_2) = \sum_{i=1}^{4} (v_i \cdot {\bf x})^2$$
3. Recognize $G(x_1, x_2)$ as an object that we have studied in this course. You might want to explicitly write out $G(x_1, x_2)$
$G(x_1, x_2)$ is the sum of the squared lengths of the projections of the $v_i$ onto the line spanned by ${\bf x}$. By the Pythagorean theorem, $\|v_i\|^2 = (v_i\cdot {\bf x})^2 + \|v_i - (v_i\cdot {\bf x})\,{\bf x}\|^2$, so $F(x_1, x_2) = \sum_{i=1}^{4}\|v_i\|^2 - G(x_1, x_2)$. Since $\sum_{i=1}^{4}\|v_i\|^2$ is a constant, minimizing $F$ is the same as maximizing $G$.
4. Write $G(x_1, x_2)$ in a vector-matrix form using ${\bf x}$ and the matrix $M$ that has $v_i$, $i=1,2,3,4$ as its rows.
$$G(x_1, x_2) = \sum_{i=1}^{4} (v_i \cdot {\bf x})^2 = {\bf x}^T A {\bf x}, \quad \text{where } A = M^T M$$
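A quick numerical check of this identity, as a NumPy sketch (the random directions are just for illustration):

```python
import numpy as np

# Rows of M are the data points v_1..v_4.
M = np.array([[1, 2], [2, 1], [3, 4], [4, 3]], dtype=float)
A = M.T @ M   # the 2x2 matrix A = M^T M

# Check that G(x) = sum_i (v_i . x)^2 equals the quadratic form x^T A x
# for a few arbitrary directions x.
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    lhs = np.sum((M @ x) ** 2)   # sum of squared dot products
    rhs = x @ A @ x              # quadratic form x^T A x
    assert np.isclose(lhs, rhs)
```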
5. Write the constraint $x_1^2+x_2^2=1$ as a product of two vectors.
$$x^Tx-1=0 $$
6. Use Lagrange multipliers to solve the maximization problem. Google how to differentiate $G(x_1, x_2)$.
$$L({\bf x}, \lambda) = x^TAx - \lambda(x^Tx - 1)$$
Setting the gradient with respect to ${\bf x}$ to zero (using the fact that $A$ is symmetric):
$$ 2x^T A - 2\lambda x^T = 0 $$
$$ x^T A = \lambda x^T $$
$$ Ax = \lambda x $$
7. Recognize your solutions as a mathematical object heavily studied in this course.
EIGENVECTOR of A!
8. Your solution will produce the desired ${\bf x}$. Find a simple matrix multiplication way of obtaining $p_1$, $p_2$, $p_3$, $p_4$.
${\bf x} =$ the unit eigenvector that corresponds to the largest eigenvalue. Stacking the four projections into one vector gives a single matrix multiplication:
$$\begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \end{bmatrix} = M{\bf x}, \qquad \text{i.e. } p_i = v_i \cdot {\bf x}$$
Case #1: $v_1=(1,2), v_2=(2,1), v_3=(3,4), v_4=(4,3)$.
$$ Ax = \lambda x $$
$$ A = M^T M = \begin{bmatrix} 1&2&3&4 \\ 2&1&4&3 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 2 & 1 \\ 3 & 4 \\ 4 & 3 \end{bmatrix} = \begin{bmatrix} 30&28 \\ 28&30 \end{bmatrix}$$
$\lambda =2, 58$; largest eigenvalue is 58. Corresponding eigenvector is $\begin{bmatrix} 1\\ 1 \end {bmatrix}$.
Since we want our ${\bf x}$ to be a unit vector, we get $x=\begin{bmatrix} \sqrt{2}/2 \\ \sqrt{2}/2 \end{bmatrix}$
$p_1 =\begin{bmatrix} 1&2 \end {bmatrix} \begin{bmatrix} \sqrt {2}/2 \\ \sqrt {2}/2 \end {bmatrix}= 3\sqrt {2}/2$
$p_2= 3\sqrt {2}/2$
$p_3 = p_4 = 7 \sqrt{2}/2$
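The whole Case #1 computation can be reproduced in a few lines of NumPy (a sketch, not part of the original solution; `np.linalg.eigh` returns the eigenvalues of a symmetric matrix in ascending order):

```python
import numpy as np

M = np.array([[1, 2], [2, 1], [3, 4], [4, 3]], dtype=float)
A = M.T @ M                  # [[30, 28], [28, 30]]

# Eigendecomposition of the symmetric matrix A; eigenvalues ascend.
eigvals, eigvecs = np.linalg.eigh(A)
x = eigvecs[:, -1]           # unit eigenvector for the largest eigenvalue
if x[0] < 0:                 # fix the sign so x points into the first quadrant
    x = -x

p = M @ x                    # all four projections at once
print(eigvals)               # ~ [2, 58]
print(p)                     # ~ [3/sqrt(2), 3/sqrt(2), 7/sqrt(2), 7/sqrt(2)]
```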
Case #2:
x=[4.1, -5.7, 1.4 ,-13.4 ,-7.6, -8.2, 5.2, -0.1, -11.6, -0.1];
y=[-0.7, -0.5, 0.9, 0.1, 0.4, 0.8, 0.2,0.6, 0.7, 1.2];
$$ Ax = \lambda x $$
$$ A = M^T M = \begin{bmatrix} 4.1& -5.7& 1.4 &-13.4 &-7.6& -8.2& 5.2& -0.1& -11.6& -0.1 \\ -0.7& -0.5& 0.9& 0.1&0.4& 0.8& 0.2&0.6& 0.7& 1.2 \end{bmatrix} \begin{bmatrix} 4.1 & -0.7 \\ -5.7 & -0.5 \\ 1.4 & 0.9 \\ -13.4 & -0.1 \\ -7.6&0.4 \\ -8.2&0.8 \\ 5.2&0.2 \\ -0.1&0.6 \\ -11.6&0.7 \\ -0.1&1.2 \end{bmatrix} = \begin{bmatrix} 517.44&-16.96 \\ -16.96&4.69 \end{bmatrix}$$
$\lambda \approx 4.1,\ 518.0$; the largest eigenvalue is $\approx 518.0$. A corresponding eigenvector is $\begin{bmatrix} -30.3 \\ 1 \end{bmatrix}$.
Since we want our ${\bf x}$ to be a unit vector (with a positive first entry), we flip the sign and divide by the magnitude $\sqrt{30.3^2 + 1} \approx 30.32$, giving $x \approx \begin{bmatrix} 0.9995 \\ -0.0330 \end{bmatrix}$. (Note that $\begin{bmatrix} 1 \\ -1/30.3 \end{bmatrix}$ is close to, but not exactly, unit length.)
Finishing with the new $p$ values:
$p_1=\begin{bmatrix} 4.1 & -0.7 \end{bmatrix}\begin{bmatrix} 0.9995 \\ -0.0330 \end{bmatrix} \approx 4.12$
$p_2=\begin{bmatrix} -5.7 & -0.5 \end{bmatrix}\begin{bmatrix} 0.9995 \\ -0.0330 \end{bmatrix} \approx -5.68$
$\vdots$
As a check, multiplying ${\bf x}^T$ by $M^T$ approximately recovers the original first coordinates, since the points lie nearly along a horizontal line:
$$\begin{bmatrix} 0.9995 & -0.0330 \end{bmatrix} \begin{bmatrix} 4.1& -5.7& 1.4 &-13.4 &-7.6& -8.2& 5.2& -0.1& -11.6& -0.1 \\ -0.7& -0.5& 0.9& 0.1&0.4& 0.8& 0.2&0.6& 0.7& 1.2 \end{bmatrix} \approx$$
$$\begin{bmatrix}4.1 &-5.7 &1.4 &-13.4 &-7.6 &-8.2 &5.2 &-0.1 &-11.6 &-0.1 \end{bmatrix}$$
***Numerically, you are close enough here***
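The Case #2 numbers can be checked with the same NumPy sketch as before (an illustration, not part of the original solution):

```python
import numpy as np

# Case #2 data: 10 points, one per row of M.
xs = [4.1, -5.7, 1.4, -13.4, -7.6, -8.2, 5.2, -0.1, -11.6, -0.1]
ys = [-0.7, -0.5, 0.9, 0.1, 0.4, 0.8, 0.2, 0.6, 0.7, 1.2]
M = np.column_stack([xs, ys])   # 10x2 data matrix
A = M.T @ M                     # ~ [[517.44, -16.96], [-16.96, 4.69]]

eigvals, eigvecs = np.linalg.eigh(A)
x = eigvecs[:, -1]              # unit eigenvector of the largest eigenvalue
if x[0] < 0:                    # pick the sign with a positive first entry
    x = -x

p = M @ x                       # the ten 1-D coordinates
print(eigvals.round(2))         # ~ [4.13, 518.0]
print(x.round(4))               # ~ [0.9995, -0.033]
```

Because the points lie nearly on a horizontal line, `p` is close to `xs`.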
**Rotation Consideration**
"Any matrix of orthonormal vectors represents a rotation and/or reflection of the axes of a Euclidean space."
"PCA can be thought of as finding a new orthogonal basis by rotating the old axis until the directions of the maximum variance are found."
Each of the above cases can be represented by a rotation matrix whose columns are the normalized eigenvectors of $A$, where $A = M^T M$.
Case #1:
$\begin{bmatrix}1/\sqrt{2} &-1/\sqrt{2} \\ 1/\sqrt{2}&1/\sqrt{2} \end{bmatrix}$
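A NumPy check (a sketch) that this Case #1 matrix really is a rotation, and that rotating the data puts the projections $p_i$ into the first coordinate:

```python
import numpy as np

# Rotation matrix whose columns are the normalized eigenvectors of A.
R = np.array([[1, -1], [1, 1]]) / np.sqrt(2)

# Orthonormal columns: R^T R = I, and det(R) = 1 (a pure rotation).
print(R.T @ R)            # ~ identity
print(np.linalg.det(R))   # ~ 1.0

# Rotating the data sends the principal direction to the first axis,
# so the first coordinate of each rotated point is its projection p_i.
M = np.array([[1, 2], [2, 1], [3, 4], [4, 3]], dtype=float)
rotated = M @ R
print(rotated[:, 0])      # ~ [3/sqrt(2), 3/sqrt(2), 7/sqrt(2), 7/sqrt(2)]
```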
Case #2:
$\begin{bmatrix} 0.9995 & 0.0330 \\ -0.0330 & 0.9995 \end{bmatrix}$, whose columns are the normalized eigenvectors $\approx \begin{bmatrix} 0.9995 \\ -0.0330 \end{bmatrix}$ and $\begin{bmatrix} 0.0330 \\ 0.9995 \end{bmatrix}$.