---
tags: Noah
---
:::info
Noah Nübling
Machine Learning with MATLAB
WS 2020/21
:::
# 05
## Problem 1
If we set an appropriate $\epsilon$, small random fluctuations will be ignored by the $\epsilon$-insensitive loss function, since residuals smaller than $\epsilon$ contribute no loss at all.
That helps us avoid fitting smaller fluctuations (i.e. noise) into the model to the point where the model generalizes less well (overfitting).
But if we choose an $\epsilon$ which is too large, that can lead to underfitting, because we remove more and more information about the data the larger we choose our $\epsilon$.
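To make this concrete, here is a minimal MATLAB sketch (my own illustration, not from the problem sheet) of the $\epsilon$-insensitive loss $\max(0, |y - f(x)| - \epsilon)$ on a few hypothetical predictions:
```matlab
% epsilon-insensitive loss: residuals smaller than epsilon cost nothing
epsLoss = @(y, f, epsilon) max(0, abs(y - f) - epsilon);

y = 1.0;                     % target value
f = [0.95, 1.03, 1.5];       % hypothetical predictions
disp(epsLoss(y, f, 0.1))     % [0 0 0.4] -> small fluctuations are ignored
disp(epsLoss(y, f, 1.0))     % [0 0 0]   -> epsilon too large: everything is ignored
```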
## Problem 2
The solution is given on pages 370-371 of the book.
Only the variable names are a little different:
$\text {(Variable name in Book)} \rightarrow \text {(Variable name in Problem)}$
$t \rightarrow i$
$\xi_+^t \rightarrow \hat{\xi}^{(i)}$
$\xi_-^t \rightarrow \xi^{(i)}$
$r^t \rightarrow y^{(i)}$
$w^Tx \rightarrow w^T\Phi(x^{(i)})$
$w_0 \rightarrow b$
Note that $w^Tx$ is actually something different than $w^T\Phi(x^{(i)})$ (a linear model in the inputs vs. a linear model in feature space), but for the purpose of our calculations that does not matter (I hope): the derivation goes through unchanged with $\Phi(x^{(i)})$ in place of $x^{(i)}$.
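For reference, the dual below is obtained by setting the derivatives of the primal Lagrangian to zero and substituting the results back in (a condensed sketch of the book's derivation, rewritten in the problem's variable names; $\hat{\eta}^{(i)}$ and $\eta^{(i)}$ are my own labels for the multipliers of the constraints $\hat{\xi}^{(i)} \geq 0$ and $\xi^{(i)} \geq 0$):
$$
\frac{\partial L}{\partial w} = 0 \Rightarrow w = \sum_{i=1}^{m}(\hat{\alpha}^{(i)} - \alpha^{(i)})\,\Phi(x^{(i)}), \qquad
\frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{i=1}^{m}(\hat{\alpha}^{(i)} - \alpha^{(i)}) = 0
$$
$$
\frac{\partial L}{\partial \hat{\xi}^{(i)}} = 0 \Rightarrow C = \hat{\alpha}^{(i)} + \hat{\eta}^{(i)}, \qquad
\frac{\partial L}{\partial \xi^{(i)}} = 0 \Rightarrow C = \alpha^{(i)} + \eta^{(i)}
$$
Plugging these into the Lagrangian eliminates $w$, $b$ and the slack variables, which leaves the dual in the $\alpha$'s alone.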
So the solution should be:
$$
\begin{aligned}
L_d = {}&-\frac{1}{2}\sum_{i=1}^{m} \sum_{j=1}^{m} (\hat{\alpha}^{(i)} - \alpha^{(i)})(\hat{\alpha}^{(j)} - \alpha^{(j)})\,\Phi(x^{(i)})^T \Phi(x^{(j)}) \\
&-\epsilon \sum_{i=1}^{m} (\hat{\alpha}^{(i)} + \alpha^{(i)}) + \sum_{i=1}^{m} y^{(i)} (\hat{\alpha}^{(i)} - \alpha^{(i)})
\end{aligned}
$$
Subject to:
$$
0 \leq \hat{\alpha}^{(i)} \leq C, \qquad 0 \leq \alpha^{(i)} \leq C, \qquad \sum_{i=1}^{m} (\hat{\alpha}^{(i)} - \alpha^{(i)}) = 0
$$
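For completeness, here is a hedged MATLAB sketch (my own toy setup, not part of the assignment) of how this dual could be solved numerically with `quadprog` from the Optimization Toolbox. Stacking $z = [\hat{\alpha}; \alpha]$, maximizing $L_d$ is the same as minimizing $\frac{1}{2}z^THz + f^Tz$ under the box and equality constraints above; all variable names below are mine:
```matlab
% Solve the SVR dual as a QP: minimize 1/2 z'*H*z + f'*z, z = [alphaHat; alpha]
rng(0);
m = 30;
X = linspace(0, 1, m)';              % toy 1-D inputs
y = 2*X + 0.05*randn(m, 1);          % noisy linear targets
epsilon = 0.1;                       % width of the insensitive tube
C = 10;                              % box constraint from the primal

K = X * X';                          % linear kernel, i.e. Phi(x) = x
H = [K, -K; -K, K] + 1e-8*eye(2*m);  % quadratic term of -L_d (tiny ridge for numerics)
f = [epsilon - y; epsilon + y];      % linear term; the +y part matches +sum y (alphaHat - alpha)
Aeq = [ones(1, m), -ones(1, m)];     % enforces sum_i (alphaHat_i - alpha_i) = 0
z = quadprog(H, f, [], [], Aeq, 0, zeros(2*m, 1), C*ones(2*m, 1));

alphaHat = z(1:m);
alpha = z(m+1:end);
w = X' * (alphaHat - alpha);         % w = sum_i (alphaHat_i - alpha_i) * x_i
% (the offset b could be recovered from the KKT conditions; omitted here)
```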