---
tags: machine-learning
---
# Support Vector Machine
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/support-vector-machine/1.png" height="100%" width="70%">
</div>
> These are my personal notes taken for the course [Machine Learning](https://www.coursera.org/learn/machine-learning#syllabus) by Stanford. Feel free to check the [assignments](https://github.com/3outeille/Coursera-Labs).
> Also, if you want to read my other notes, feel free to check them at my [blog](https://ferdinandmom.engineer/machine-learning/).
## I) Intuition
The goal of the SVM (Support Vector Machine) algorithm is to draw a line that divides the training data into positive and negative samples as well as possible, then classify new data by determining on which side of the hyperplane they lie.
Consider the following figure in which x's represent positive training examples, o's denote negative training examples, a decision boundary (this is the line given by the equation $\theta^Tx=0$, and is also called the **separating hyperplane**) is also shown, and three points have also been labeled A, B and C.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/support-vector-machine/2.png">
</div>
Notice that the point A is very far from the decision boundary. If we are asked to make a prediction for the value of y at A, it seems we should be quite confident that $y = 1$ there.
Conversely, the point C is very close to the decision boundary, and while it’s on the side of the decision boundary on which we would predict $y = 1$, it seems likely that just a small change to the decision boundary could easily have caused our prediction to be $y = 0$. Hence, we’re much more confident about our prediction at A than at C.
<ins>**Remark:**</ins> It seems that the **farther** we are **from the decision boundary**, the **more confident** we are on our **prediction**.
Thus, in order to find a hyperplane that divides our training data into the two classes as well as possible, we need to make this hyperplane **as far as possible from each sample**.
To do so, we will introduce the notion of **margin**.
## II) Notation
Let:
- **x** be a feature vector (i.e., the input of the SVM). $x \in \mathbb{R}^n$, where $n$ is the dimension of the feature vector.
- **y** be the class (i.e., the output of the SVM). $y \in \{-1,1\}$ (i.e., the classification task is binary).
- **w** and **b** be the parameters of the SVM: we need to learn them using the training set.
- ($x^{(i)}$, $y^{(i)}$) be the $i^{th}$ sample in the dataset. Let's assume we have $N$ samples in the training set.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/support-vector-machine/3.png" width="70%">
</div>
<br>
The class $y$ is determined as follows:
$$y^{(i)}=\left\{
\begin{array}{ll}
-1 &if \;\> \mathbf{w^T}\mathbf{x}^{(i)}+b \leq -1 \\
1 &if \;\> \mathbf{w^T}\mathbf{x}^{(i)}+b \ge 1 \\
\end{array}
\right.$$
which can be more concisely written as $y^{(i)} (\mathbf{w^T}\mathbf{x}^{(i)}+b) \ge 1$.
The margins are the hyperplanes with the following equations:
$$\left\{
\begin{array}{ll}
w \cdot x + b = 1 \; (\text{same as } w^Tx + b = 1)
\\
w \cdot x + b = -1 \; (\text{same as } w^Tx + b = -1)
\end{array}
\right.$$
To make the decision boundary as far as possible from each sample, we need to **maximize the margin**.
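As a quick numerical sanity check (a minimal sketch with hand-picked `w`, `b` and toy samples, which are not part of the course), the constraint $y^{(i)}(\mathbf{w^T}\mathbf{x}^{(i)}+b) \ge 1$ and the margin width $\frac{2}{\|w\|}$ can be verified with a few lines of NumPy:

```python
import numpy as np

# Hand-picked parameters of a separating hyperplane (illustrative values only).
w = np.array([2.0, 1.0])
b = -3.0

# Two toy samples with their labels in {-1, +1}.
X = np.array([[3.0, 2.0],   # should land on the positive side
              [0.0, 0.0]])  # should land on the negative side
y = np.array([1, -1])

# Functional margin: y * (w.x + b) must be >= 1 for every sample.
print(y * (X @ w + b))          # [5. 3.] -> both constraints are satisfied

# Width of the margin between the hyperplanes w.x + b = +1 and w.x + b = -1.
print(2 / np.linalg.norm(w))    # 2 / ||w|| ~ 0.894
```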
## III) Goal
The SVM aims at satisfying 2 requirements:
1. The SVM should maximize the distance between the two decision boundaries (maximize the margin). Mathematically, that means that we want to maximize the distance between the 2 hyperplanes defined by $w^Tx+b=-1$ and by $w^Tx+b=1$. This distance is equal to $\frac{2}{\left \| w \right \|}$ [(Explanation of why $\frac{2}{\left \| w \right \|}$)](#1). This means we want to solve $max \frac{2}{\left \| w \right \|}$. Equivalently, we want to solve $min \frac{1}{2} \left \| w \right \|^2$. [(Explanation of why $min \frac{1}{2} \left \| w \right \|^2$)](#2).
2. The SVM should also correctly classify all $x^{(i)}$, which means:
$$y^{(i)}(w^Tx^{(i)}+b) \geq 1, \forall i \in \{1,…,N\}$$
Which leads us to the following quadratic optimization problem:
$$
\boxed{
\begin{array}{}
\min_{\mathbf{w},b}\quad \frac{1}{2}\|w\|^2
\\
s.t.\quad y^{(i)} (w^Tx^{(i)}+b) \ge 1, \; \forall i \in \{1,\dots,N\}
\end{array}
}$$
This is the **hard-margin SVM**, as this quadratic optimization problem admits a solution only if the data is linearly separable.
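To make this QP concrete, here is a minimal sketch (the toy data and the solver choice are mine, not the course's) that feeds the primal problem directly to `scipy.optimize.minimize` with the SLSQP method:

```python
import numpy as np
from scipy.optimize import minimize

# Tiny linearly separable toy set (made-up values).
X = np.array([[2.0, 2.0], [3.0, 3.0], [0.0, 0.0], [-1.0, 0.5]])
y = np.array([1, 1, -1, -1])

def objective(params):
    w = params[:-1]                          # params = [w_1, ..., w_n, b]
    return 0.5 * np.dot(w, w)                # (1/2) ||w||^2

def constraints(params):
    w, b = params[:-1], params[-1]
    return y * (X @ w + b) - 1               # must be >= 0 for every sample

res = minimize(objective,
               x0=np.zeros(X.shape[1] + 1),
               constraints=[{"type": "ineq", "fun": constraints}],
               method="SLSQP")

w_star, b_star = res.x[:-1], res.x[-1]
print(w_star, b_star, 2 / np.linalg.norm(w_star))   # weights, bias, margin width
```

If the data were not linearly separable, this problem would have no feasible point and the solver would fail, which is exactly why the soft-margin relaxation below exists.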
One can relax the constraints by introducing so-called **slack variables** $\xi^{(i)}$. Note that each sample of the training set has its own slack variable. This gives us the following quadratic optimization problem:
$$
\boxed{
\begin{array}{}
\min_{\mathbf{w},b} \quad \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi^{(i)}
\\
\\
s.t. \quad y^{(i)} (w^Tx^{(i)}+b) \ge 1 - \xi^{(i)}, &\forall i \in \{1,\dots,N\}
\\
\quad \quad \; \xi^{(i)}\ge 0, &\forall i \in \{1,\dots,N\}
\end{array}
}$$
This is the **soft-margin SVM**. C is a hyperparameter called **penalty of the error term**. [(Explanation of slack variables / C influence)](#3).
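In practice one rarely writes this QP by hand; scikit-learn's `SVC` solves the soft-margin problem directly, with `C` as the penalty hyperparameter. A minimal sketch on made-up, slightly noisy data:

```python
import numpy as np
from sklearn.svm import SVC

# Toy data where one point sits on the wrong side (made-up values).
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 2.0],
              [0.0, 0.0], [-1.0, 0.5], [2.2, 2.1]])
y = np.array([1, 1, 1, -1, -1, -1])   # the last point lies among the positives

# kernel='linear' keeps the plain soft-margin formulation above.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print(clf.coef_, clf.intercept_)   # learned w and b
print(clf.support_vectors_)        # the support vectors
```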
One can add even more flexibility by introducing a function $\phi$ that maps the original feature space to a higher dimensional feature space. This allows non-linear decision boundaries (wiggly shape). The quadratic optimization problem becomes:
$$\boxed{
\begin{array}{}
\min_{\mathbf{w},b}\quad \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi^{(i)}
\\
\\
s.t.\quad y^{(i)} (w^T\phi \left(x^{(i)}\right) +b) \ge 1 - \xi^{(i)}, &\forall i \in \{1,\dots,N\}
\\
\quad \quad \; \xi^{(i)}\ge0, &\forall i \in \{1,\dots,N\}
\end{array}
}$$
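To make $\phi$ concrete, here is a small sketch of one possible feature map (the degree-2 polynomial map, my choice for illustration) that lifts 2-D points into 3-D, where a circular boundary in the original space becomes a linear one:

```python
import numpy as np

def phi(x):
    """Map (x1, x2) to (x1^2, sqrt(2)*x1*x2, x2^2).

    In the lifted space, the circle x1^2 + x2^2 = r^2 becomes the
    linear surface z1 + z3 = r^2, so a linear SVM can separate
    points inside the circle from points outside it.
    """
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

print(phi(np.array([0.5, 0.0])))   # inside the unit circle  -> [0.25 0.   0.  ]
print(phi(np.array([2.0, 1.0])))   # outside the unit circle -> [4.   2.83 1.  ]
```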
## IV) Optimization
The quadratic optimization problem can be transformed into another optimization problem named the **Lagrangian dual problem**. [(Explanation of why using Lagrangian dual problem)](#4)
<ins>**Case 1: Hard Margin SVM**</ins>
We define the Lagrangian $\mathcal{L}(w,b,\alpha)$ as follows:
$$\boxed{
\mathcal{L}(w,b,\alpha)=f(w)+\sum_{i=1}^l\alpha_ih_i(w,b)
}$$
with $\alpha_i$ the Lagrange multipliers.
The function for the **Primal** problem is:
$$\boxed{
min_{w,b} \quad \mathcal{L}(w,b,\alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{m}\alpha_i[y^{(i)} (w^Tx^{(i)}+b)-1]
}$$
In order to minimize it, we differentiate with respect to $w$ and $b$ and set the derivatives to 0; substituting back, we obtain the function for the **Dual** problem. [(Explanation of how to get Primal / Dual problem)](#5)
The function for the **Dual** problem (**Wolfe dual problem**) is:
$$\boxed{
\begin{array}{ll}
\text{max}_\alpha \mathcal{W}(\alpha) = \sum_{i=1}^{m}{\alpha_i} - \frac{1}{2}\sum_{i=1}^{m}{\sum_{j=1}^{m}{y^{(i)} y^{(j)} \alpha_i \alpha_j x^{(i)^T} x^{(j)}}}
\\
\text{s.t.} \quad \forall i: \alpha_i \ge 0
\\
\quad \quad \; \sum_{i=1}^{m}{y^{(i)} \alpha_i} = 0
\end{array}
}$$
[(Explanation of why bother with the dual problem + why maximize alpha)](#6)
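As an optional sketch (not from the course), this dual can also be handed to a generic solver: minimize $-\mathcal{W}(\alpha)$ under the two constraints above. The toy data and the SLSQP choice are mine:

```python
import numpy as np
from scipy.optimize import minimize

# Tiny linearly separable toy set (made-up values).
X = np.array([[2.0, 2.0], [3.0, 3.0], [0.0, 0.0], [-1.0, 0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
m = len(y)

# Q_ij = y_i y_j <x_i, x_j>, the matrix appearing in W(alpha).
Q = (y[:, None] * y[None, :]) * (X @ X.T)

def neg_dual(alpha):
    # Maximizing W(alpha) is the same as minimizing -W(alpha).
    return -(alpha.sum() - 0.5 * alpha @ Q @ alpha)

res = minimize(neg_dual,
               x0=np.zeros(m),
               bounds=[(0, None)] * m,                      # alpha_i >= 0
               constraints=[{"type": "eq",
                             "fun": lambda a: a @ y}],       # sum_i alpha_i y_i = 0
               method="SLSQP")

alpha = res.x
w_star = (alpha * y) @ X        # w* = sum_i alpha_i y_i x^(i) (see section IX)
print(alpha.round(3), w_star)   # most alpha_i are ~0; the non-zero ones mark the support vectors
```

For the soft-margin case below, the only change would be to bound each $\alpha_i$ by $C$, i.e. `bounds=[(0, C)] * m`.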
<ins>**Case 2: Soft Margin SVM**</ins>
We define the Lagrangian $\mathcal{L}(w,b,\zeta,\alpha,\beta)$ as follows:
$$\boxed{
\mathcal{L}(w,b,\zeta,\alpha,\beta) = f(w,\zeta) + \sum_{i=1}^l\alpha_ih_i(w,b,\zeta) + \sum_{i=1}^l\beta_i g_i(\zeta)
}$$
with $\alpha_i, \beta_i$ the Lagrange multipliers and $\zeta_i$ the slack variables (written $\xi^{(i)}$ in section III).
The function for the **Primal** problem is:
$$\boxed{
min_{w,b,\zeta} \quad \mathcal{L}(w,b,\zeta,\alpha,\beta) = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{m}\zeta_i - \sum_{i=1}^{m}\alpha_i[y^{(i)} (w^Tx^{(i)}+b)-1 + \zeta_i] - \sum_{i=1}^{m}\beta_i\zeta_i}$$
In order to minimize it, we differentiate with respect to $w$, $b$ and $\zeta$ and set the derivatives to 0; substituting back, we obtain the function for the **Dual** problem.
The function for the **Dual** problem (**Wolfe dual problem**) is:
$$\boxed{
\begin{array}{ll}
max_\alpha \quad \mathcal{W}(\alpha) = \sum_{i=1}^{m}{\alpha_i} - \frac{1}{2}\sum_{i=1}^{m}{\sum_{j=1}^{m}{y^{(i)} y^{(j)} \alpha_i \alpha_j x^{(i)^T} x^{(j)}}}
\\
\text{s.t.} \quad \forall i: 0 \leq \alpha_i \leq C
\\
\quad \quad \; \sum_{i=1}^{m}{y^{(i)} \alpha_i} = 0
\end{array}
}$$
## V) Making a prediction
Once the $\alpha^{(i)}$ are learned, one can predict the class of a new sample with the feature vector $x^{(test)}$ as follows:
$$\boxed{
\begin{array}{ll}
y^{\text {test}}&=\text {sign}(w^{*T} x^{\text {test}}+b^*)
\\
&= \text {sign}(\sum_{i =1}^{N}\alpha^{(i)}y^{(i)}(x^{(i)})^Tx^{\text {test}}+b^*)
\\
&= \text {sign}(\sum_{i =1}^{N}\alpha^{(i)}y^{(i)} \left \langle x^{(i)}, x^{\text {test}} \right \rangle +b^*)
\end{array}
}$$
The training samples $x^{(i)}$ for which $\alpha^{(i)} > 0$ are called **support vectors**.
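A minimal NumPy sketch of this decision rule (the names `alpha`, `X`, `y`, `b_star` are assumed to come from a dual solver such as the sketch in section IV; they are not defined by the course):

```python
import numpy as np

def predict(x_test, alpha, X, y, b_star):
    """Return sign( sum_i alpha_i y_i <x^(i), x_test> + b* )."""
    scores = alpha * y * (X @ x_test)   # one term per training sample
    return np.sign(scores.sum() + b_star)

# Only the support vectors (alpha_i > 0) contribute to the sum, so in
# practice we can keep just those rows of alpha, X and y.
```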
## VI) Kernel Trick
Now, the Soft Margin SVM can handle non-linear separability caused by noisy data. But what if the non-linear separability is not caused by noise? What if the data are characteristically non-linearly separable? Can we still separate the data using an SVM? The answer is yes, and we'll talk about a technique called the **kernel trick** to deal with this.
Imagine you have a two-dimensional non-linearly separable dataset that you would like to classify using an SVM. It looks impossible because the data is not linearly separable. However, if we transform the two-dimensional data to a higher dimension, say three or even ten dimensions, we would be able to find a hyperplane to separate the data.
The problem is, if we have a large dataset containing, say, millions of examples, the transformation will take a long time to run, let alone the calculations in the later optimization problem. Let’s revisit the Wolfe dual problem (for hard-margin SVM case):
$$
\boxed{
\begin{array}{ll}
max_\alpha \quad \mathcal{W}(\alpha) = \sum_{i=1}^{m}{\alpha_i} - \frac{1}{2}\sum_{i=1}^{m}{\sum_{j=1}^{m}{y^{(i)} y^{(j)} \alpha_i \alpha_j x^{(i)^T} x^{(j)}}}
\\
\text{s.t.} \quad \forall i: \alpha_i \ge 0
\\
\quad \quad \; \sum_{i=1}^{m}{y^{(i)} \alpha_i} = 0
\end{array}
}$$
To solve this problem, we actually only care about the result of the dot product $\left \langle x^{(i)}, x^{(j)} \right \rangle$. If there were a function that could compute, directly from the original features, the same dot product we would obtain after transforming the data into the higher dimension, it would be fantastic. Such a function is called a **kernel function**.
Let's denote by $\mathcal{K}(x^{(i)},x^{(j)})= \left \langle \phi(x^{(i)}), \phi(x^{(j)}) \right \rangle$ the **kernel function**. We rewrite the Wolfe dual problem:
$$\boxed{
\begin{array}{ll}
max_\alpha \quad \mathcal{W}(\alpha) = \sum_{i=1}^{n}{\alpha_i} - \frac{1}{2}\sum_{i=1}^{n}{\sum_{j=1}^{n}{y^{(i)} y^{(j)} \alpha_i \alpha_j \mathcal{K}(x^{(i)}, x^{(j)})}}
\\
\text{s.t.} \quad \forall i: \alpha_i \ge 0
\\
\quad \quad \; \sum_{i=1}^{n}{y^{(i)} \alpha_i} = 0
\end{array}
}$$
There are multiple kernels, such as the following (a small NumPy sketch of them is given after this list):
- **Polynomial kernel (of degree $d$ and coefficient $r$)**:
$$K_P(\mathbf{x}_i, \mathbf{x}_j) = (\langle \mathbf{x}_i, \mathbf{x}_j \rangle + r)^d$$
- **Gaussian kernel** (RBF kernel), with $\gamma = \frac{1}{2 \sigma^2}$:
$$K_{\text{Gauss}}(\mathbf{x}_i, \mathbf{x}_j) = e^{-\frac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{2 \sigma^2}} = e^{-\gamma\|\mathbf{x}_i - \mathbf{x}_j\|^2}$$
- **Sigmoid kernel** ($\gamma$ = how much influence single training examples have):
$$K_{\text{tanh}}(\mathbf{x}_i, \mathbf{x}_j) = \tanh(\gamma \langle \mathbf{x}_i, \mathbf{x}_j \rangle - r)$$
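A minimal NumPy sketch of these three kernels (the parameter values below are arbitrary; `gamma` plays the role of $\frac{1}{2\sigma^2}$ for the Gaussian kernel):

```python
import numpy as np

def polynomial_kernel(xi, xj, d=3, r=1.0):
    return (xi @ xj + r) ** d

def gaussian_kernel(xi, xj, gamma=0.5):
    # gamma = 1 / (2 * sigma^2)
    return np.exp(-gamma * np.linalg.norm(xi - xj) ** 2)

def sigmoid_kernel(xi, xj, gamma=0.1, r=0.0):
    return np.tanh(gamma * (xi @ xj) - r)

a, b = np.array([1.0, 2.0]), np.array([2.0, 0.5])
print(polynomial_kernel(a, b), gaussian_kernel(a, b), sigmoid_kernel(a, b))
```

Each of these functions returns a scalar that can be plugged in wherever $\mathcal{K}(x^{(i)}, x^{(j)})$ appears in the dual problem above.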
## VII) Choosing parameters
Choosing C (recall that C = $\frac{1}{\lambda}$):
- If C is large, then we get higher variance/lower bias.
- If C is small, then we get lower variance/higher bias.
For the Polynomial kernel, a degree of $d = 1$ is just the linear kernel (SVM without a kernel). A larger value of $d$ will make the decision boundary more complex and might result in overfitting.
The other parameter we must choose is $\gamma = \frac{1}{2\sigma^2}$ from the Gaussian Kernel function:
- With a large $\gamma$, the SVM decision boundary will depend only on the points that are closest to it, effectively ignoring points that are farther away (high variance / low bias).
- With a small $\gamma$, the decision boundary will also take into account points that are further from it (low variance / high bias). A cross-validated way to pick $C$ and $\gamma$ is sketched below.
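A common, pragmatic way to choose $C$ and $\gamma$ is a cross-validated grid search; here is a minimal scikit-learn sketch (the dataset and grid values are arbitrary choices of mine):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# A small non-linearly separable toy dataset.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100],
              "gamma": [0.01, 0.1, 1, 10]}   # gamma = 1 / (2 sigma^2)

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)   # e.g. {'C': 10, 'gamma': 1}
print(search.best_score_)    # mean cross-validated accuracy
```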
## VIII) Logistic Regression vs SVM
Let's denote $n$ the number of features and $m$ the number of training
examples.
- If $n$ is large (relative to $m$), then use logistic regression, or SVM without a kernel (the "linear kernel"). (E.g., $n \geq m$, $n = 10000$, $m = 10 \sim 1000$.) We don't have enough examples to use a complex polynomial hypothesis.
- If $n$ is small and $m$ is intermediate, then use SVM with a Gaussian kernel. (E.g., $n = 1 \sim 1000$, $m = 10 \sim 10000$.) We have enough examples that we may need a complex non-linear hypothesis.
- If $n$ is small and $m$ is large, then manually create/add more features, then use logistic regression or SVM without a kernel. (E.g., $n = 1 \sim 1000$, $m = 50000+$.) We want to increase our features so that logistic regression becomes applicable. A rough sketch of these rules of thumb follows.
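As a purely illustrative translation of these rules into scikit-learn (the thresholds and the helper name are mine, not the course's):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC

def pick_model(n_features, n_examples):
    """Illustrative mapping of the rules of thumb above to estimators."""
    if n_features >= n_examples:
        # n large relative to m: a linear model is usually enough.
        return LogisticRegression(max_iter=1000)
    if n_examples <= 10_000:
        # n small, m intermediate: a Gaussian-kernel SVM is affordable.
        return SVC(kernel="rbf")
    # n small, m large: kernel SVMs get slow; add features by hand and
    # fall back to a linear model.
    return LinearSVC()

print(pick_model(10_000, 500))   # LogisticRegression(...)
print(pick_model(100, 5_000))    # SVC(...)
```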
## IX) Bonus
- <a id="1">**Explanation of why $\frac{2}{\left \| w \right \|}$:**</a>
Let $x_0$ be a point on the hyperplane $w^Tx + b = -1$. To measure the distance between $w^Tx + b = -1$ and $w^Tx + b = 1$, we only need to compute the perpendicular distance from $x_0$ to the plane $w^Tx + b = 1$, denoted $m$. Since $w$ is perpendicular to both hyperplanes, we move from $x_0$ along the unit vector $\frac{w}{\|w\|}$.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/support-vector-machine/4.png">
</div>
<br>
Thus, we have:
$$w^T(x_0 + m\frac{w}{\|w\|}) + b = 1$$
Expanding this equation, we have:
$$
\begin{array}{ll}
&w^Tx_0 + m\frac{w^Tw}{\|w\|} + b = 1
\\
&w^Tx_0 + m\frac{\|w\|^2}{\|w\|} + b = 1
\\
&w^Tx_0 + b + m\|w\| = 1
\\
&-1 + m\|w\| = 1
\\
&\boxed{m = \frac{2}{\|w\|}}
\end{array}
$$
<ins>**Remark:**</ins> $w^Tw = \left \langle w, w \right \rangle$ and $\left \langle w, w \right \rangle = \|w\| \cdot \|w\| \cdot \cos(0) = \|w\|^2$.
---
- <a id="2">**Explanation of why $min \frac{1}{2} \left \| w \right \|^2$:**</a>
$max \frac{2}{\|w\|} \sim max \frac{1}{\|w\|} \sim min \|w\| \sim min \frac{1}{2}\|w\|^2$ (we square to get rid of the square root in the norm; the factor $\frac{1}{2}$ only makes the derivative cleaner)
---
- <a id="3">**Explanation of slack variables / C influence:**</a>
Even if the underlying process which generates the features for the two classes is linearly separable, noise can make the data not separable. The introduction of slack variables to relax the requirement of linear separability solves this problem. The trade-off between accepting some errors and a more complex model is weighted by a parameter C.
C gives you control over how the SVM will handle errors. If we set C to positive infinity, we will get the same result as the Hard Margin SVM. However, if we set C to 0, the errors are no longer penalized, and we will end up with a hyperplane that does not classify anything.
The rules of thumb are:
- **small values of C** will result in a **wider margin**, at the cost of some misclassifications.
- **large values of C** will tend towards the **Hard Margin classifier**, tolerating almost **zero constraint violation**.
We need to find a value of C such that the solution is not overly impacted by noisy data.
---
- <a id="4">**Explanation of why using Lagrangian dual problem:**</a>
Our goal was to solve this quadratic optimization problem:
$$\boxed{
\begin{array}{ll}
\text{min}_{\mathbf{w},b}\quad \frac{1}{2}\|w\|^2
\\
s.t.\quad y^{(i)} (w^Tx^{(i)}+b) \ge 1, \; \forall i \in \{1,\dots,N\}
\end{array}
}$$
In order to find the solution to an optimization problem with constraints, we use **Lagrange multipliers**. They enable us to build a new expression that we can minimize/maximize without explicitly handling the constraints. Since our goal is to minimize $\frac{1}{2}\|w\|^2$, we have to minimize the function of the primal problem.
---
- <a id="5">**Explanation of how to get the Primal / Dual problem (and how to get $b$):**</a>
Lagrange's method states that if we want to find the minimum of $f$ under the inequality constraint $g$, we just need to solve:
$$\begin{array}{ll}
\nabla \mathcal{L}(x^*) = \nabla f(x^*)-\alpha \nabla g(x^*)=0
\\
\\
\text{with } \alpha \text{ the Lagrange multipliers}
\end{array}$$
The extremum $x^*$ of the optimization problem has to satisfy all the **KKT conditions**.
<ins>**KKT Conditions**</ins>
1. Primal Feasibility: $g(x^*) \leq 0$
2. Dual Feasibility: $\alpha \geq 0$
3. Complementary Slackness: $\alpha g(x^*)=0$
4. Lagrangian Stationarity: $\nabla f(x^*)-\alpha \nabla g(x^*)=0$
Initially, we have this:
$$\boxed{
\begin{array}{ll}
\text{min}_{w,b}\quad \frac{1}{2}\|w\|^2
\\
s.t.\quad y^{(i)} (w^Tx^{(i)}+b) \ge 1, \; \forall i \in \{1,\dots,N\}
\end{array}
}$$
By applying the **Primal Feasibility**, we have the **Primal problem**:
$$\boxed{
min_{w,b} \quad \mathcal{L}(w,b,\alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{m}\alpha_i[y^{(i)} (w^Tx^{(i)}+b)-1]
}$$
By applying the **Lagrangian Stationarity** condition, we will have:
$$\begin{array}{ll}
\nabla_w \mathcal{L}(w,b,\alpha) = w - \sum_{i=1}^{m}{\alpha_iy_ix_i}=0 \implies \boxed{w = \sum_{i=1}^{m}{\alpha_iy_ix_i}}
\\
\nabla_b \mathcal{L}(w,b,\alpha) = -\sum_{i=1}^{m}{\alpha_iy_i}=0 \implies \boxed{\sum_{i=1}^{m}{\alpha_iy_i} = 0}
\end{array}$$
Substituting these back into the Lagrangian $\mathcal{L}$, we get the **Wolfe dual problem**:
$$\boxed{
\mathcal{W}(\alpha) = \sum_{i=1}^{n}{\alpha_i} - \frac{1}{2}\sum_{i=1}^{n}{\sum_{j=1}^{n}{y^{(i)} y^{(j)} \alpha_i \alpha_j x^{(i)^T} x^{(j)}}}
}$$
By applying the **Dual Feasibility**, we thus have:
$$\boxed{
\begin{array}{ll}
max_\alpha \quad \mathcal{W}(\alpha) = \sum_{i=1}^{n}{\alpha_i} - \frac{1}{2}\sum_{i=1}^{n}{\sum_{j=1}^{n}{y^{(i)} y^{(j)} \alpha_i \alpha_j x^{(i)^T} x^{(j)}}}
\\
\\
\text{s.t.} \quad \forall i: \alpha_i \ge 0
\\
\quad \quad \; \sum_{i=1}^{n}{y^{(i)} \alpha_i} = 0
\end{array}
}$$
After finding the $\alpha$ that maximizes the function $\mathcal{W}$, we can plug it into $w^* = \sum_{i=1}^{m}{\alpha_iy_ix_i}$ to find $w^*$.
$w$ tells us the direction of the hyperplane. See figure below.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/support-vector-machine/5.png">
</div>
<br>
Now, we need to find $b^*$ because it tells us the position of the hyperplane.
To do so, we apply **Complementary Slackness** condition:
$$\begin{array}{ll}
&\alpha g(x)=0
\\
&\alpha \left [y^{(i)} (w^{*T} x^{(i)}+b^*)-1 \right] = 0
\\
&y^{(i)} (w^{*T}x^{(i)}+b^*) = 1 \quad (\text{for any support vector, since } \alpha > 0)
\\
&(y^{(i)})^2 (w^{*T}x^{(i)}+b^*) = y^{(i)} \quad (\text{multiplying both sides by } y^{(i)})
\\
& w^{*T}x^{(i)}+b^* = y^{(i)}
\\
&\boxed{b^* = \frac{1}{S}\sum_{i=1}^S(y^{(i)}-w^{*T}x^{(i)})}
\end{array}$$
<ins>**Remark:**</ins>
$(y^{(i)})^2 = 1$ because $y^{(i)} \in \{-1, 1\}$. $S$ is the number of support vectors; the last line averages $b^*$ over all of them.
Another way is to first find the worst (closest) positive example ($min_{i:y^{(i)}=1} \> w^{*T}x^{(i)}$) and the worst (closest) negative example ($max_{i:y^{(i)}=-1} \> w^{*T}x^{(i)}$). The separating hyperplane must sit halfway between them, so $b^*$ is minus their average.
Thus,
$$\boxed{b^* = -\frac{max_{i:y^{(i)}=-1} \> w^{*T}x^{(i)} + min_{i:y^{(i)}=1} \> w^{*T}x^{(i)}}{2}}$$
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/support-vector-machine/6.png">
</div>
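Continuing the dual sketch from section IV (the names `alpha`, `X`, `y` are assumptions carried over from it), recovering $w^*$ and $b^*$ in NumPy looks like this:

```python
import numpy as np

def recover_w_b(alpha, X, y, tol=1e-6):
    """Recover w* and b* from the dual solution alpha.

    w* = sum_i alpha_i y^(i) x^(i); b* is averaged over the support
    vectors (the samples with alpha_i > 0), as in the boxed formula above.
    """
    w_star = (alpha * y) @ X
    support = alpha > tol                               # support vector mask
    b_star = np.mean(y[support] - X[support] @ w_star)
    return w_star, b_star
```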
---
- <a id="6">**Explanation of why bother with the dual problem + why maximize alpha:**</a>
We bother with the dual problem because it gives us an expression that depends only on the $\alpha^{(i)}$, and we know that most of the $\alpha^{(i)}$ are 0, except those of the support vectors ($\alpha^{(i)} > 0$). This sum is **very efficient** to compute when there are only a few support vectors. Further, since the data now appear only through scalar products, we may apply the **kernel trick**.
By maximizing over $\alpha$ in the dual problem, we solve the problem that corresponds to minimizing the primal with respect to $w$ and $b$.