# NLP2
## Partie 3
### Question a
We want $c ≃ v_j$ for some $j \in [1, n]$. Since $\{v_1, \dots, v_n\}$ is an arbitrary set of vectors, this forces the weights to satisfy:
$$
(E) :
\alpha_i =
\begin{cases}
1 & \text{if $i = j$} \\
0 & \text{else}
\end{cases}
$$
The $\alpha_i$ are defined by:
$$
\alpha_i = \frac{\exp(k_i^t \cdot q)}{\sum_{j=1}^{n} \exp(k_j^t \cdot q)}
$$
So $(E)$ has no exact solution, but we can approach it in the limit:
- if $i = j$, we want $\alpha_j \to 1$: the term $\exp(k_j^t \cdot q)$ must dominate, so $k_j^t \cdot q \to + \infty$
- if $i \neq j$, we want $\alpha_i \to 0$, i.e. $\exp(k_i^t \cdot q) \to 0$, so $k_i^t \cdot q \to - \infty$
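This limiting behavior is easy to check numerically. Below is a minimal sketch (the sizes $n = 4$, $d = 8$ and the choice of orthonormal keys are hypothetical, not from the statement): scaling $q$ along one key $k_j$ drives the weights $\alpha$ toward the one-hot vector $e_j$, so $c \to v_j$.

```python
import numpy as np

def attention(q, K, V):
    """Single-query dot-product attention: c = sum_i alpha_i v_i."""
    logits = K @ q                         # logits[i] = k_i^t . q
    alpha = np.exp(logits - logits.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha, alpha @ V

# Hypothetical toy setup: 4 orthonormal keys in R^8, arbitrary values.
K = np.eye(4, 8)
rng = np.random.default_rng(0)
V = rng.standard_normal((4, 8))

j = 2
# q = z * k_j gives logit z for k_j and 0 for the others, so as z grows
# alpha approaches e_j and c approaches v_j.
for z in (1.0, 10.0, 100.0):
    alpha, c = attention(z * K[j], K, V)
    print(f"z={z:6.1f}  alpha={np.round(alpha, 4)}")
```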
### Question b
Let $\{k_1, \dots, k_n\}$ be vectors of $\mathbb R^d$ such that:
$$
\forall i \neq j, k_i \perp k_j \\
\forall i, \lVert k_i \rVert = 1
$$
Let $\{v_1, \dots, v_n\}$ be arbitrary vectors of $\mathbb R^d$, and let $(a, b) \in [1, n]^2$ with $a \neq b$. We want:
$$
c = \frac{1}{2}(v_a + v_b)
$$
So:
$$
(E_1) :
\alpha_i =
\begin{cases}
\frac{1}{2} & \text{if $i \in \{a, b\}$} \\
0 & \text{else}
\end{cases}
$$
$$
(E_1) :
\begin{cases}
\frac{1}{2} = \frac{\exp(k_a^t \cdot q)}{\exp(k_a^t \cdot q) + \exp(k_b^t \cdot q) +\sum_{j=1, j \not \in \{a, b\}}^{n} \exp(k_j^t \cdot q)}\\
\frac{1}{2} = \frac{\exp(k_b^t \cdot q)}{\exp(k_a^t \cdot q) + \exp(k_b^t \cdot q) +\sum_{j=1, j \not \in \{a, b\}}^{n} \exp(k_j^t \cdot q)}\\
\forall i \not \in \{a, b\}, \; 0 = \frac{\exp(k_i^t \cdot q)}{\exp(k_a^t \cdot q) + \exp(k_b^t \cdot q) +\sum_{j=1, j \not \in \{a, b\}}^{n} \exp(k_j^t \cdot q)} \\
\end{cases}
$$
$$
(E_1) :
\begin{cases}
\exp(k_a^t \cdot q) = \exp(k_b^t \cdot q) \\
\frac{1}{2} = \frac{\exp(k_a^t \cdot q)}{2\exp(k_a^t \cdot q) +\sum_{j=1, j \not \in \{a, b\}}^{n} \exp(k_j^t \cdot q)}\\
\forall i \not \in \{a, b\}, \; 0 = \frac{\exp(k_i^t \cdot q)}{2 \exp(k_a^t \cdot q) +\sum_{j=1, j \not \in \{a, b\}}^{n} \exp(k_j^t \cdot q)} \\
\end{cases}
$$
$$
(E_1) :
\begin{cases}
(k_a - k_b)^t \cdot q = 0 \\
\frac{1}{2} = \frac{1}{2 + \sum_{j=1, j \not \in \{a, b\}}^{n} \exp((k_j - k_a)^t \cdot q)}\\
\forall i \not \in \{a, b\}, \; 0 = \frac{1}{1 + \sum_{j=1, j \neq i}^{n} \exp((k_j - k_i)^t \cdot q)} \\
\end{cases}
$$
Let $\theta$ be the angle between $(k_j - k_i)$ and $q$. Since the $k_i$ are orthonormal, $\lVert k_j - k_i \rVert^2 = \lVert k_j \rVert^2 + \lVert k_i \rVert^2 = 2$, so:
$$
\forall i \neq j, \quad (k_j - k_i)^t \cdot q = \lVert k_j - k_i \rVert \cdot \lVert q \rVert \cdot \cos(\theta) = \sqrt{2} \cdot \lVert q \rVert \cdot \cos(\theta)
$$
So $(E_1)$ has no exact solution either, but it can be approached under the following condition:
$$
q = z \cdot (k_a + k_b) \text{ with } z \to + \infty
$$
Then $k_a^t \cdot q = k_b^t \cdot q = z$ and $k_i^t \cdot q = 0$ for $i \not \in \{a, b\}$, so:
$$
(E_1) :
\begin{cases}
(k_a - k_b)^t \cdot q = 0 \\
\frac{1}{2} = \frac{1}{2 + (n - 2) \exp(-z)}\\
\forall i \not \in \{a, b\}, \; 0 = \frac{1}{(n - 2) + 2 \exp(z)} \\
\end{cases}
$$
All three equations are asymptotically satisfied as $z \to + \infty$.
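The limit can be checked numerically. A minimal sketch (toy sizes and orthonormal keys are hypothetical choices): with $q = z \cdot (k_a + k_b)$, the logits are $z$ for $k_a$ and $k_b$ and $0$ for the others, so $\alpha_a = \alpha_b \to \frac{1}{2}$ and the remaining weights decay like $\exp(-z)$.

```python
import numpy as np

def attention(q, K, V):
    """Single-query dot-product attention output c = sum_i alpha_i v_i."""
    logits = K @ q
    alpha = np.exp(logits - logits.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ V

# Hypothetical toy setup: n = 4 orthonormal keys in R^8, arbitrary values.
n, d = 4, 8
K = np.eye(n, d)
rng = np.random.default_rng(1)
V = rng.standard_normal((n, d))

a, b = 0, 3
z = 50.0
c = attention(z * (K[a] + K[b]), K, V)
# Deviation from the target (v_a + v_b) / 2 shrinks exponentially in z.
print(np.abs(c - 0.5 * (V[a] + V[b])).max())
```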
### Question c
Suppose $k_i \sim \mathcal{N}(\mu_i, \Sigma_i)$.
#### c.i
Suppose $\Sigma_i = \alpha \cdot I$ with $\alpha \to 0$
Then $k_i ≃ \mu_i$; assuming the $\mu_i$ are orthonormal like the keys of question b, we can reuse its solution: $q = z \cdot (\mu_a + \mu_b)$ with $z \to + \infty$.
#### c.ii
Suppose $\Sigma_a = \alpha \cdot I + \frac{1}{2} (\mu_a \cdot \mu_a^t)$ with $\alpha \to 0$, the other covariances vanishing as in c.i, so $k_i ≃ \mu_i$ for $i \neq a$.

As $\alpha \to 0$, the covariance of $k_a$ concentrates along the direction $\mu_a$: a sample from $\mathcal{N}(0, \frac{1}{2} \mu_a \mu_a^t)$ is $\eta \cdot \mu_a$ with $\eta \sim \mathcal{N}(0, \frac{1}{2})$, using $\lVert \mu_a \rVert = 1$. In other words:
$$
k_a ≃ (1 + \eta) \cdot \mu_a \text{ with } \eta \sim \mathcal{N}(0, \tfrac{1}{2})
$$
Keeping $q = z \cdot (\mu_a + \mu_b)$ with $z \to + \infty$, the logits become:
$$
k_a^t \cdot q ≃ z \cdot (1 + \eta), \quad k_b^t \cdot q ≃ z
$$
Qualitatively, since $\eta$ is symmetric around $0$, the two logits are balanced on average and we expect $c ≃ \frac{1}{2}(v_a + v_b)$ in expectation. But the logit gap $z \cdot \eta$ also diverges with $z$, so any single sample of $k_a$ puts almost all the weight on one value: $c ≃ v_a$ when $\eta > 0$ and $c ≃ v_b$ when $\eta < 0$, i.e. $c$ fluctuates between $v_a$ and $v_b$ across samples.
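A quick simulation illustrates this. The sketch below (hypothetical toy sizes; orthonormal means; $k_a$ perturbed along $\mu_a$ by $\eta \sim \mathcal{N}(0, \frac{1}{2})$, which is the $\alpha \to 0$ limit of the covariance above) shows that the output averaged over samples is close to $\frac{1}{2}(v_a + v_b)$, while individual samples spread widely.

```python
import numpy as np

def attention(q, K, V):
    logits = K @ q
    alpha = np.exp(logits - logits.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ V

# Hypothetical toy setup: orthonormal means mu_i, arbitrary values.
n, d = 4, 8
mu = np.eye(n, d)
rng = np.random.default_rng(2)
V = rng.standard_normal((n, d))

a, b = 0, 1
z = 10.0
q = z * (mu[a] + mu[b])

samples = []
for _ in range(2000):
    K = mu.copy()
    eta = rng.normal(0.0, np.sqrt(0.5))   # k_a perturbed along mu_a
    K[a] = (1.0 + eta) * mu[a]
    samples.append(attention(q, K, V))
samples = np.asarray(samples)

# Mean over samples stays near (v_a + v_b) / 2 ...
print("mean deviation :", np.abs(samples.mean(axis=0) - 0.5 * (V[a] + V[b])).max())
# ... but individual samples swing between ~v_a and ~v_b.
print("per-sample std :", samples.std(axis=0).max())
```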
### Question d
#### d.i
Let's define:
$$
q_1 = z_1 \cdot \mu_a \text{ with } z_1 \to + \infty \\
q_2 = z_2 \cdot \mu_b \text{ with } z_2 \to + \infty
$$
#### d.ii
Each head now targets a single key: the noise on $k_a$ only rescales its own logit, which still tends to $+ \infty$ with $z_1$ for almost every sample, while the other logits stay at $0$. So the two outputs are approximately:
$$
c_1 ≃ v_a \\
c_2 ≃ v_b
$$
So
$$
c ≃ \frac{1}{2} (c_1 + c_2) ≃ \frac{1}{2} (v_a + v_b)
$$
which is now robust to the random perturbation of $k_a$.
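The robustness gain can be measured empirically. The sketch below (same hypothetical setup and $k_a$ noise model as in c.ii) compares, over many samples, how far the single-query output of c.ii and the two-head average land from the target $\frac{1}{2}(v_a + v_b)$.

```python
import numpy as np

def attention(q, K, V):
    logits = K @ q
    alpha = np.exp(logits - logits.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ V

# Hypothetical toy setup: orthonormal means mu_i, arbitrary values.
n, d = 4, 8
mu = np.eye(n, d)
rng = np.random.default_rng(3)
V = rng.standard_normal((n, d))

a, b = 0, 1
z = 10.0
target = 0.5 * (V[a] + V[b])

errs_single, errs_multi = [], []
for _ in range(2000):
    K = mu.copy()
    K[a] = (1.0 + rng.normal(0.0, np.sqrt(0.5))) * mu[a]  # noisy k_a, as in c.ii
    # c.ii: one query aimed at both keys.
    c_single = attention(z * (mu[a] + mu[b]), K, V)
    # d: one head per key (q_1 = z mu_a, q_2 = z mu_b), then average.
    c_multi = 0.5 * (attention(z * mu[a], K, V) + attention(z * mu[b], K, V))
    errs_single.append(np.abs(c_single - target).max())
    errs_multi.append(np.abs(c_multi - target).max())

print("single query, mean error:", np.mean(errs_single))
print("two heads,    mean error:", np.mean(errs_multi))
```

The two-head error stays small because head 2 is noise-free and head 1 fails only on the rare samples where $k_a^t \cdot q_1 \le 0$.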