1. The first column of V indicates that the prevalence of "America" and "American" equally contribute to 83% of the variance from the original data. So, if we were to create a column that was just the sum of the two counts, this column (multiplied by a constant, say) would retain 83% of the variance in the data, A.
Answered True (Class Discussion and Explanation)
In situation 1, the first eigenvector of $A^TA$ was approximately
$$v_1 = \frac{1}{\sqrt{2}} \begin{bmatrix}
1 \\ 1 \end{bmatrix}$$
This means the first principal component is a direction where "America" and "American" contribute equally.
The eigenvalue associated with this vector explains 83% of the total variance. This tells us that most of the variance in how presidents use these two words is captured by how often they use them together.
A new column that is just the sum of the "America" and "American" counts is (up to a constant factor of $\sqrt{2}$) the projection of the data onto this eigenvector. Because the data are standardized first, multiplying the projected column by a constant does not change the fraction of variance it explains. Thus, this new single column would retain ≈ 83% of the variance in the standardized dataset A.
Therefore, the first column of V shows equal loadings on "America" and "American". Creating a summed column is equivalent to projecting onto this first principal component, and it would preserve about 83% of the variance in A.
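To make this concrete, here is a minimal numpy sketch using synthetic standardized counts with $\rho \approx 0.66$ as a stand-in for the real speech data; it shows that the (rescaled) summed column retains the same fraction of variance as PC1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the standardized "america"/"american" counts, rho ~ 0.66.
n, rho = 155, 0.66
A = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
A = (A - A.mean(axis=0)) / A.std(axis=0, ddof=1)   # center and scale

# Fraction of variance explained by PC1, from the eigenvalues of A^T A.
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
pve1 = eigvals[0] / eigvals.sum()

# The summed column, rescaled by 1/sqrt(2) so it is the projection onto
# v1 = (1, 1)/sqrt(2); the constant factor does not affect the comparison.
proj = (A[:, 0] + A[:, 1]) / np.sqrt(2)
pve_sum = proj.var(ddof=1) / A.var(axis=0, ddof=1).sum()

print(f"PC1: {pve1:.2%}, summed column: {pve_sum:.2%}")   # the two fractions agree
```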
2. For any matrix A, $A^⊤A$ will always contain positive eigenvalues, so singular values of A will always be strictly positive.
Answered False (Class Discussion and Explanation)
We built A by standardizing (centering and scaling) the two columns ("america", "american"). Then
$$\frac{1}{n-1}A^TA = R = \begin{bmatrix}
1 & \rho \\
\rho & 1
\end{bmatrix}
$$
where $\rho = corr(america, american)$
Eigenvalues of R are:
$\lambda_1' = 1 + \rho,$ $\lambda_2' = 1 - \rho$
Therefore, eigenvalues of $A^TA$ are
$\lambda_1 = (n-1)(1+\rho),$ $\lambda_2 = (n-1)(1-\rho)$
From Situation 1, PC1 explains about 83% of the variance, so we solve $\frac{1+\rho}{2} = 0.83 \Rightarrow \rho = 0.66$.
With n = 155 speeches, n - 1 = 154:
- $\lambda_1 = 154 \cdot (1 + 0.66) = 154 \cdot 1.66 = 255.64$
- $\lambda_2 = 154 \cdot (1 - 0.66) = 154 \cdot 0.34 = 52.36$
Both are positive here, so the singular values of A are $\sqrt{255.64} ≈ 15.99$ and $\sqrt{52.36} ≈ 7.24$ which are strictly positive in this dataset.
The claim says for any matrix A, $A^TA$ has strictly positive eigenvalues $\Rightarrow$ all singular values are strictly positive. That's not true because:
- In general, $A^TA$ is positive semidefinite, so its eigenvalues are non-negative ($\ge 0$).
- If A is rank-deficient (its columns are linearly dependent), then $A^TA$ has at least one zero eigenvalue, hence a zero singular value.
Let's take a counterexample:
If "american" were perfectly proportional to "america" (correlation $\rho = 1$), then
$\lambda_1 = (n-1)(1+1) = 2(n-1),$
$\lambda_2 = (n-1)(1-1) = 0,$
Therefore, one singular value is 0. Singular values are guaranteed only to be non-negative, not strictly positive.
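A quick numpy check of this counterexample, using hypothetical counts where the second column is exactly proportional to the first:

```python
import numpy as np

# Hypothetical counts where "american" is exactly twice "america" (rho = 1).
america = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
A = np.column_stack([america, 2 * america])      # rank-deficient: rank(A) = 1

print(np.linalg.eigvalsh(A.T @ A))               # one eigenvalue is (numerically) 0
print(np.linalg.svd(A, compute_uv=False))        # so one singular value is 0, not positive
```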
3. Each singular value of D corresponds to a column of D. So, the smallest singular value (approximately 1.08) indicates a term that explains the smallest amount of variance in the data.
Answer False -- this question was not discussed in the class
A singular value does not correspond to a single original column. It corresponds to a principal direction, i.e. a component.
In Situation 1 we reduced to the two standardized features
$x_1 =$ "america", $x_2 =$ "american", forming A (centered & scaled). Then
$$ \frac{1}{n-1} A^TA = R = \begin{bmatrix}
1 & \rho \\
\rho & 1
\end{bmatrix}$$
Eigenpairs of R:
$$\lambda_1' = 1 + \rho, \lambda_2' = 1 - \rho, v_1 = \frac{1}{\sqrt{2}} \begin{bmatrix}
1\\
1
\end{bmatrix}, v_2 = \frac{1}{\sqrt{2}} \begin{bmatrix}
1\\
-1
\end{bmatrix}
$$
$v_1$ is the sum direction (both terms together)
$v_2$ is the contrast direction (one term vs the other)
The eigenvalues of $A^TA$ are $\lambda_i = (n-1)\lambda'_i,$ and singular values of A are
$\sigma_1 = \sqrt{(n-1)(1+\rho)}, \sigma_2 = \sqrt{(n-1)(1-\rho)}$
with the observed explained variance $PVE_1 ≈ 0.83 \Rightarrow \rho = 0.66$
Hence (for $n = 155 \Rightarrow n-1 = 154$):
$\sigma_1 = \sqrt{154 \cdot 1.66} ≈ 15.99, \quad \sigma_2 = \sqrt{154 \cdot 0.34} ≈ 7.24$
$\sigma_2$ is the smallest singular value which corresponds to the second component with direction $v_2 = \frac{1}{\sqrt{2}}(1, -1)$, i.e., the difference between "america" and "american". It does not pick a single term/column; it represents a direction in the 2D feature space.
So, regardless of the exact numeric value we observed, the interpretation is the same:
Singular values measure variance captured by components, and they do not map one-to-one to original columns.
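A short numpy sketch (again with synthetic standardized data at $\rho \approx 0.66$) makes the pairing explicit: the smallest singular value goes with the contrast direction $\pm(1, -1)/\sqrt{2}$, not with either individual column:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic standardized "america"/"american" columns with rho ~ 0.66.
n, rho = 155, 0.66
A = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
A = (A - A.mean(axis=0)) / A.std(axis=0, ddof=1)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Each singular value pairs with a row of Vt: a direction mixing both columns.
print("singular values:", s)
print("direction for smallest sigma:", Vt[-1])   # approximately +/- (1, -1)/sqrt(2)
```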
4. The column values in AV correspond to the same x-y axis (i.e., the labelled tick marks) on the plotted scatterplot called "Data and Eigenvectors".
Answered False -- This question was not discussed in the class
The scatterplot places speeches in 2D space with coordinates (x, y):
- x: standardized count of "american"
- y: standardized count of "america"
So, the tick marks on the axes are in units of the original standardized variables.
A is the standardized data matrix with 155 rows and 2 columns
V is the matrix of eigenvectors of $A^TA$:
$$ V = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}
\end{bmatrix}$$
When we compute AV, we rotate the data into the PC basis.
$$ AV = \begin{bmatrix} PC1 & PC2 \end{bmatrix} = \begin{bmatrix} \frac{x+y}{\sqrt{2}} & \frac{x-y}{\sqrt{2}} \end{bmatrix}$$
The columns of AV are coordinates in the new PC1-PC2 system.
But the scatterplot's tick marks are still labelled "american" (x-axis) and "america" (y-axis).
That means the original scatterplot shows (x, y). The transformed data AV would instead plot (PC1, PC2). So the column values in AV do not correspond to the same x-y axes of the original scatterplot.

Therefore this statement is false because AV gives the coordinates in a rotated principal component basis (PC1, PC2) while the plotted scatterplot is in original standardized variables (american, america).
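A small sketch of the distinction, using hypothetical standardized data and the V given above; the columns of AV are PC coordinates, not the plot's "american"/"america" axes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical standardized data A (155 x 2) and the eigenvector matrix V from above.
n, rho = 155, 0.66
A = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
A = (A - A.mean(axis=0)) / A.std(axis=0, ddof=1)
V = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

scores = A @ V     # column 0 = (x + y)/sqrt(2) = PC1, column 1 = (x - y)/sqrt(2) = PC2

print(A[:3])       # coordinates shown on the scatterplot's labelled tick marks
print(scores[:3])  # rotated PC1/PC2 coordinates: different values, different axes
```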
5. If the image matrix is called M, then the "Variance Explained ..." plot can be generated using only eigenvalues of $M^⊤M$ or $MM^T$
Answered True (Class Discussion and Explanation)
For the image matrix $M \in R^{m \times n}$,
$M = U \Sigma V^T, \Sigma = diag(\sigma_1, \sigma_2,..., \sigma_r), \sigma_1 \ge \sigma_2 \ge ... \ge 0$
Then
$M^TM = V \Sigma^2 V^T, MM^T = U \Sigma^2 U^T$
So the eigenvalues of $M^TM$ or $MM^T$ are exactly $\sigma^2_i$.
The variance explained by the $i$-th component is proportional to $\sigma^2_i$. The cumulative explained variance after $k$ components is:
$CEV(k) = \frac{\sum^k_{i=1}\sigma^2_i}{\sum^r_{i=1}\sigma^2_i} = \frac{\sum^k_{i=1}\lambda_i}{\sum^r_{i=1}\lambda_i},$
where $\lambda_i$ are the eigenvalues of $M^TM$ (or $MM^T$).
Therefore, the "Variance Explained", the plot can be generated using only the eigenvalues of $M^TM$ (or $MM^T$)
Example:
Let $$M = \begin{bmatrix}
1 & 0\\
0 & 2
\end{bmatrix}$$
Then $$ MM^T = \begin{bmatrix}
1 & 0\\
0 & 4
\end{bmatrix} \Rightarrow \lambda_1 = 4, \lambda_2 = 1 \Rightarrow \sigma_1 = 2, \sigma_2 = 1$$
Explained variance by PC1:
$\frac{\lambda_1}{\lambda_1 + \lambda_2} = \frac{4}{4 + 1} = 0.8\ (80\%)$
and the cumulative value after 2 components is 100%. This matches using $\sigma_i^2$ as well, since $\sigma^2_1 = 4, \sigma^2_2 = 1$.
Therefore, in image SVD, the variance-explained curve is computed from ${\lambda_i} = {\sigma^2_i}$, i.e., the eigenvalues of $MM^T$ or $M^TM$.
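As a check, the curve for the toy matrix above can be produced from the eigenvalues of $M^TM$ alone; here is a minimal numpy sketch:

```python
import numpy as np

# Toy image matrix from the example above.
M = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# lambda_i = sigma_i^2: eigenvalues of M^T M, sorted in decreasing order.
eigvals = np.sort(np.linalg.eigvalsh(M.T @ M))[::-1]
cumulative = np.cumsum(eigvals) / eigvals.sum()   # the "Variance Explained" curve

print(eigvals)      # [4. 1.]
print(cumulative)   # [0.8 1. ]  -> 80% after one component, 100% after both
```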

In conclusion, the eigenvalues alone are sufficient: since $\lambda_i = \sigma^2_i$, the cumulative variance explained is obtained by normalizing the eigenvalues directly, with no need to recover the singular values themselves.
6. The Truncated SVD for any matrix M will always reconstruct a different matrix than the original matrix.
Answered True
Correct Answer is False
(Class Discussion and Explanation)
Let the image matrix be $M \in R^{m\times n}$ with SVD
$M = U \Sigma V^T$, $\Sigma = diag(\sigma_1, \sigma_2,...,\sigma_r), \sigma_1 \ge ... \ge \sigma_r \gt 0$
where $r = rank(M)$ equals the number of nonzero singular values.
The truncated SVD at rank k is:
$M_k = U_k \Sigma_k V^T_k,$
where $U_k$ contains the first k columns of $U$, $V_k$ the first k columns of $V$, and $\Sigma_k = diag(\sigma_1,...,\sigma_k)$.
Exact reconstruction when we don't actually truncate:
If k = r (i.e. we keep all non-zero singular values),
$M_r = U_r \Sigma_r V^T_r = U \Sigma V^T = M$
So the truncated SVD does not change the matrix in this case.
When $k \lt r$, we discard some singular values. The reconstruction error (Frobenius norm) is:
$|| M-M_k ||^2_F = \sum_{i = k+1}^{r} \sigma_i^2 \gt 0, \text{ so } M_k \ne M$
In the dog-image example, we picked $k = 100$. The image matrix typically has rank $r \gt 100$, so we dropped the singular values $\sigma_{101}, \sigma_{102},..., \sigma_r$, making $M_{100}$ a compressed approximation of M.
Example:
Let's take a simple full rank matrix with two non-zero singular values:
$$\begin{bmatrix}
2 & 0\\
0 & 1
\end{bmatrix} = U \Sigma V^T \text{ with } U = I, V = I, \Sigma = diag(2, 1), r = 2 $$
- Keep all $(k = 2 = r)$: $M_2 = U \Sigma V^T = M$ (exact).
- Truncate $(k = 1 \lt r)$:
$$M_1 = U_1\, diag(2)\, V_1^T = \begin{bmatrix}
2 & 0\\
0 & 0
\end{bmatrix} ≠ M, \quad ||M - M_1||^2_F = 1^2 = 1$$
(equal to the discarded $\sigma^2_2$)
Therefore, in Situation 2, truncating with $k = 100$ gave a slightly different image, but keeping all singular values would perfectly reconstruct the original. So the claim that "Truncated SVD always reconstructs a different matrix" is false.
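A minimal numpy illustration with the same $2 \times 2$ example: keeping $k = r$ singular values reproduces M exactly, while $k \lt r$ does not:

```python
import numpy as np

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])                        # rank r = 2

U, s, Vt = np.linalg.svd(M)

def truncate(k):
    """Rank-k reconstruction M_k = U_k Sigma_k V_k^T."""
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.allclose(truncate(2), M))                # True: k = r reconstructs M exactly
print(np.linalg.norm(M - truncate(1), "fro")**2)  # 1.0 = sigma_2^2, the discarded energy
```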
7. Applying Truncated SVD can reduce the rank of an image matrix.
Answer True -- this question was not discussed in the class
Truncating the SVD reduces the rank whenever we keep fewer singular values than the original rank.
Let the image matrix be $M \in R^{m \times n}$ with SVD
$M = U \Sigma V^T$, $\Sigma = diag(\sigma_1, \sigma_2,...,\sigma_r), \sigma_1 \ge .... \ge \sigma_r \gt 0$
where $r = rank(M)$ equals the number of nonzero singular values.
The truncated SVD at rank k is:
$M_k = U_k \Sigma_k V^T_k,$
where $U_k \in R^{m \times k}, \Sigma_k = diag(\sigma_1,....,\sigma_k), V_k \in R^{n \times k}$
- Since $\Sigma_k$ has only k (positive) diagonal entries,
$rank(M_k) = k$
Therefore, if $k \lt r,$ the truncated reconstruction has strictly smaller rank:
$rank(M_k) = k \lt r = rank(M)$
Example:
$$ \begin{bmatrix}
2 & 0\\
0 & 1
\end{bmatrix}, \sigma_1 = 2, \sigma_2 = 1, r = 2 $$
- Rank-1 truncation $(k = 1)$:
$$M_1 = U_1\, diag(2)\, V^T_1 = \begin{bmatrix}
2 & 0\\
0 & 0
\end{bmatrix},$$
which has $rank(M_1) = 1$ (reduced from 2).
Best rank-k approximation:
$|| M-M_k ||^2_F = \sum_{i = k+1}^{r} \sigma_i^2$
The variance (energy) retained by $M_k$ is $\sum^k_{i = 1} \sigma^2_i$; the rest is lost.
In Situation 2, when we kept k = 100 singular values for the dog image, we produced a rank-100 approximation, i.e., a lower-rank matrix than the original (which typically has $r \gt 100$).
Therefore, in Situation 2, truncating the SVD (e.g., keeping only 100 singular values) reduces the rank of the image matrix, which is why the compressed image is an approximation of the original.
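To see the rank drop numerically, here is a hedged sketch with a random full-rank matrix standing in for the dog image (the real image matrix would behave the same way):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random full-rank stand-in for the image matrix (rank 200).
M = rng.standard_normal((200, 300))
U, s, Vt = np.linalg.svd(M, full_matrices=False)

k = 100
M_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]      # truncated reconstruction

print(np.linalg.matrix_rank(M))     # 200
print(np.linalg.matrix_rank(M_k))   # 100: truncation reduced the rank
```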
8. The fidelity (or accuracy) of a compressed image depends on k, and after a certain point, increasing k has diminishing returns.
Answer True -- this question is not discussed in the class
Fidelity depends on k because we retain more singular values (variance/energy) as k grows. But since the singular values get smaller, the marginal improvement shrinks; after a certain point, extra components add almost no visible difference.
We can measure fidelity in SVD compression of the image matrix M:
$M = U \Sigma V^T$
The singular values $\sigma_1 \ge \sigma_2 \ge...\ge \sigma_r$ measure the amount of energy each component carries.
If we keep the top k singular values and reconstruct:
$M_k = U_k \Sigma_k V^T_k$
then the fidelity of $M_k$ relative to M is measured by how much of the total energy is preserved:
$Fidelity(k) = \frac{\sum^k_{i=1}\sigma^2_i}{\sum^r_{i=1}\sigma^2_i}$
If k is small, we only keep the largest singular values $\rightarrow$ the reconstruction misses a lot of details.
As k increases, you add more singular values $\rightarrow$ fidelity improves because more of the image's variance is retained.
The singular values usually decay rapidly:
$\sigma^2_1 \gg \sigma^2_2 \gg \sigma^2_3 \gg ...$
which means:
- The first few singular values capture the major structures (broad shapes, contrasts).
- Later singular values capture fine details/noise
So each extra component beyond a certain point only contributes a very small amount of new information:
$\Delta(k) = \frac{\sigma^2_k}{\sum^r_{i=1}\sigma^2_i}$ gets smaller as $k \rightarrow r$.
Example:
Let's say the squared singular values are:
$\{100, 25, 9, 4, 1\} \rightarrow \text{total} = 139$
- Top 1: $\frac{100}{139}$ ≈ 72\%
- Top 2: $\frac{125}{139}$ ≈ 90\%
- Top 3: $\frac{134}{139}$ ≈ 96\%
- Top 4: $\frac{138}{139}$ ≈ 99\%
- Top 5: $\frac{139}{139}$ = 100\%
As we see, after ~3-4 components each extra component only improves fidelity by ~1-2\%. That's diminishing returns.
Therefore, the accuracy of the compressed image depends directly on 𝑘. Larger 𝑘 means higher fidelity, but after a certain point the improvement is marginal compared to the cost of storing extra singular values.
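The diminishing returns in the worked example can be tabulated directly; a small sketch using the assumed squared singular values {100, 25, 9, 4, 1}:

```python
import numpy as np

# Assumed squared singular values from the example above.
sigma_sq = np.array([100.0, 25.0, 9.0, 4.0, 1.0])

fidelity = np.cumsum(sigma_sq) / sigma_sq.sum()   # Fidelity(k)
gain = np.diff(fidelity, prepend=0.0)             # marginal improvement from each extra k

for k, (f, g) in enumerate(zip(fidelity, gain), start=1):
    print(f"k={k}: fidelity={f:.0%}  marginal gain={g:+.0%}")
# fidelity climbs 72% -> 90% -> 96% -> 99% -> 100%; the per-step gains shrink rapidly
```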