---
tags: machine-learning
---
# Naive Bayes & Gaussian Discriminant Analysis
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/naive-bayes-gaussian-discriminant-analysis/1.png" height="100%" width="70%">
</div>
> These are my personal notes taken for the course [Machine learning](https://www.coursera.org/learn/machine-learning#syllabus) by Stanford. Feel free to check the [assignments](https://github.com/3outeille/Coursera-Labs).
> Also, if you want to read my other notes, feel free to check them at my [blog](https://ferdinandmom.engineer/machine-learning/).
# I) Anomaly Detection System
Consider a set of points $\{x^{(1)}, x^{(2)}, \cdots, x^{(m)}\}$ in a training set (represented by the blue points), representing the regular distribution of features $x_1$ and $x_2$. The aim of anomaly detection is to separate anomalies in the test set (represented by the red points) from normal examples, based on the distribution of features in the training set. For example, in the plot below, point A is not an outlier, while points B and C in the test set can be considered **anomalous (or outliers)**.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/naive-bayes-gaussian-discriminant-analysis/2.png">
</div>
<br>
Formally, in anomaly detection the $m$ training examples are considered to be normal or non-anomalous, and the algorithm must decide whether the next example, $x_{test}$, is anomalous. So given the training set, it must come up with a model $p(x)$ that gives the probability of a sample being normal (high probability means normal, low probability means anomaly). The resulting decision boundary is defined by:
$$\boxed{
\begin{array}{ll}
p(x_{test}) & < \epsilon \text{, flag as outlier or anomaly} \\
p(x_{test}) & \geq \epsilon \text{, flag as normal or non-anomalous}
\end{array}
}$$
Some of the popular applications of anomaly detection are:
- **Fraud Detection**: An observation set $x^{(i)}$ would represent user $i$'s activities. The model $p(x)$ is trained on data from various users, and unusual users are identified by checking which ones have $p(x^{(i)}) < \epsilon$.
- **Manufacturing**: Based on features of products produced on a production line, one can identify the ones with outlier characteristics for quality control and other such preventive measures.
- **Monitoring Systems in a Data Center**: Based on characteristics of a machine's behaviour, such as CPU load, memory usage, etc., it is possible to identify anomalous machines, prevent node failures in a data center, and initiate diagnostic measures for maximum up-time.
# II) Gaussian distribution
**Gaussian distribution** is also called Normal Distribution.
If $x \in \mathbb{R}$ and $x$ follows a Gaussian distribution with mean $\mu$ and variance $\sigma^2$, this is denoted as $x \sim \mathcal{N}(\mu, \sigma^2)$ (the tilde $\sim$ can be read as "distributed as").
A **Standard normal Gaussian distribution** is a bell-shaped probability distribution curve with mean $\mu = 0$ and standard deviation $\sigma=1$, as shown in the plot below.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/naive-bayes-gaussian-discriminant-analysis/3.png">
</div>
<br>
The parameters $\mu$ and $\sigma$ signify the centering and spread of the Gaussian curve, as marked in the plot above. It can also be seen that the density is highest around the mean and falls off rapidly as the distance from the mean increases.
The probability density of $x$ under a Gaussian distribution $\mathcal{N}(\mu,\sigma^2)$ is given by,
$$\boxed{p(x;\mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(- \frac{(x - \mu)^2}{2\sigma^2}\right)}$$
where,
- $\mu$ is the mean.
- $\sigma$ is the standard deviation ($\sigma^2$ is the variance).
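As a quick check, the formula above translates directly into a few lines of NumPy. A minimal sketch (the sample values are made up for illustration):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma2):
    """Univariate Gaussian density p(x; mu, sigma^2)."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# Density of x = 1.5 under the standard normal N(0, 1)
print(gaussian_pdf(1.5, mu=0.0, sigma2=1.0))  # ~0.1295
```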
The effect of the mean and the standard deviation on a Gaussian plot can be seen clearly in the figure below.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/naive-bayes-gaussian-discriminant-analysis/4.png">
</div>
<br>
It can be noticed that, while the mean $\mu$ defines the centering of the distribution, the standard deviation $\sigma$ defines its spread. Also, as the spread increases, the height of the plot decreases, because the total area under a probability distribution must always integrate to 1.
Given a dataset, as in the previous section, $\{x^{(1)}, x^{(2)}, \cdots, x^{(m)}\}$, it is possible to determine the best-fitting Gaussian distribution by using the following **parameter estimation**,
$$\boxed{
\begin{array}{ll}
\mu &= {1 \over m} \sum_{i=1}^m x^{(i)}
\\
\sigma^2 &= {1 \over m} \sum_{i=1}^m (x^{(i)} - \mu)^2
\end{array}
}$$
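In code, these two estimates are just the sample mean and the (biased) sample variance. A minimal sketch, assuming `data` is a 1-D NumPy array holding one feature:

```python
import numpy as np

def estimate_gaussian(data):
    """Maximum-likelihood estimates of mu and sigma^2 for one feature."""
    mu = data.mean()
    sigma2 = ((data - mu) ** 2).mean()  # note: divides by m, not m - 1
    return mu, sigma2
```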
# III) Density Estimation Algorithm
Given a training set of examples $\lbrace x^{(1)}, \dots ,x^{(m)}\rbrace$, where each example is a vector $x \in \mathbb{R}^n$, the density is modeled as a product of per-feature Gaussians:
$$p(x) = p(x_1;\mu_1,\sigma_1^2)p(x_2;\mu_2,\sigma^2_2)\cdots p(x_n;\mu_n,\sigma^2_n)$$
<ins>**Remark:**</ins>
The features $\lbrace x_1, \dots, x_n \rbrace$ are independent of each other.
More compactly, the above expression can be written as follows:
$$\boxed{p(x) = \displaystyle \prod^n_{j=1} p(x_j;\mu_j,\sigma_j^2)}$$
<ins>**To summarize:**</ins>
1. Choose features $x_j$ that are indicative of anomalous behaviour (general properties that define an instance).
2. Fit parameters $\mu_1, \cdots, \mu_n, \sigma_1^2, \cdots, \sigma_n^2$ given by,
$$\boxed{
\begin{array}{ll}
\mu_j &= {1 \over m} \sum_{i=1}^m x_j^{(i)}
\\
\sigma_j^2 &= {1 \over m} \sum_{i=1}^m (x_j^{(i)} - \mu_j)^2
\end{array}
}$$
3. Given a new example $x$, compute $p(x)$:
$$\boxed{p(x) = \displaystyle \prod^n_{j=1} p(x_j;\mu_j,\sigma_j^2) = \prod\limits^n_{j=1} \dfrac{1}{\sqrt{2\pi}\sigma_j}\exp\left(-\dfrac{(x_j - \mu_j)^2}{2\sigma^2_j}\right)}$$
4. **Anomaly** if $p(x) < \epsilon$
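Putting the four steps together, here is a minimal sketch of the whole procedure (the array shapes and the value of `epsilon` are illustrative assumptions; in practice $\epsilon$ is tuned on a cross-validation set, as described in the next section):

```python
import numpy as np

def fit_params(X):
    """Step 2: per-feature mu_j and sigma_j^2 from an (m, n) training matrix."""
    mu = X.mean(axis=0)
    sigma2 = ((X - mu) ** 2).mean(axis=0)
    return mu, sigma2

def density(X, mu, sigma2):
    """Step 3: p(x) as the product of per-feature Gaussian densities."""
    per_feature = np.exp(-(X - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    return per_feature.prod(axis=1)

# Step 4: flag anomalies (X_train and X_test are (m, n) arrays)
# mu, sigma2 = fit_params(X_train)
# anomalies = density(X_test, mu, sigma2) < 1e-3  # epsilon = 1e-3 is a placeholder
```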
# IV) Evaluation of Anomaly Detection System
A single real-valued evaluation metric helps when deciding whether a change improves an anomaly detection system.
In order to evaluate an anomaly detection system, it is important to have a labeled dataset (similar to a supervised learning algorithm). This dataset would generally be skewed, with a high number of normal cases. To evaluate the algorithm, follow these steps ($y=0$ is normal and $y=1$ is anomalous):
1. Split the examples with $y=0$ into 60%-20%-20% (training-CV-test set).
2. Split the examples with $y=1$ into 50%-50% (CV-test set).
3. Fit the **density estimation algorithm** on the training set.
4. Check the performance on the cross-validation set using metrics like true positives, true negatives, false positives, false negatives, precision/recall, and F1-score. **Accuracy would not be a valid metric because in most cases the classes are highly skewed**.
5. Following this, the value of $\epsilon$ can be tuned on the cross-validation set to improve the metric chosen in the previous step.
6. The evaluation of the final model on the held-out test set gives an unbiased picture of how the model performs.
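For step 5, a common approach is to scan candidate thresholds and keep the one with the best F1-score. A minimal sketch, assuming `p_cv` holds the densities of the cross-validation examples and `y_cv` their labels (1 for anomalous):

```python
import numpy as np

def select_epsilon(p_cv, y_cv):
    """Pick the threshold epsilon that maximizes F1 on the CV set."""
    best_eps, best_f1 = 0.0, 0.0
    for eps in np.linspace(p_cv.min(), p_cv.max(), 1000):
        pred = p_cv < eps                       # predicted anomalies
        tp = np.sum((pred == 1) & (y_cv == 1))  # true positives
        fp = np.sum((pred == 1) & (y_cv == 0))  # false positives
        fn = np.sum((pred == 0) & (y_cv == 1))  # false negatives
        if tp == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_eps, best_f1 = eps, f1
    return best_eps, best_f1
```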
# V) Anomaly Detection vs Supervised Learning
A natural question arises: "If we have labeled data, why not use a supervised learning algorithm like logistic regression or SVM?"
Use **anomaly detection** when...
1. We have a **very small number of positive examples** ($y=1$; 0-20 examples is common) and a **large number of negative examples** ($y=0$).
2. We have many different "types" of anomalies and it is hard for any algorithm to learn from positive examples what the anomalies look like; future anomalies may look nothing like any of the anomalous examples we've seen so far.
Use **supervised learning** when...
1. We have a **large number of both positive and negative examples**. In other words, the training set is more evenly divided into classes.
2. We have enough positive examples for the algorithm to get a sense of what new positive examples look like. Future positive examples are likely similar to the ones in the training set.
# VI) Feature Engineering
Feature engineering (or choosing the features which should be used) has a great deal of effect on the performance of an anomaly detection algorithm.
Indeed, since the algorithm tries to fit a Gaussian distribution to the dataset, it is always helpful if the histogram of the data fed to the density estimation looks similar to a Gaussian bell shape. If the data does not line up with the shape of a Gaussian bell curve, a transformation can sometimes help bring the feature closer to a Gaussian approximation.
Some of the popular transforms used are,
1. $\log(x)$
2. $\log(x + c)$
3. $\sqrt{x}$
4. $x^{\frac{1}{3}}$
5. Look at the anomaly that the algorithm is failing to flag, and see if that inspires you to create some new feature, so that with this new feature it becomes easier to distinguish the anomalies from your good examples.
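A quick way to judge such transforms is to plot histograms before and after. A minimal sketch, using a synthetic skewed feature for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.random.exponential(scale=2.0, size=1000)  # a skewed, non-Gaussian feature

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].hist(x, bins=50)
axes[0].set_title("raw feature")
axes[1].hist(np.log(x + 1), bins=50)             # transform 2 with c = 1
axes[1].set_title("log(x + 1)")
plt.show()
```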
# VII) Multivariate Gaussian Distribution
The density estimation seen earlier had the underlying assumption that the features are independent of each other. While the assumption simplifies the analysis, there are various downsides to the assumption as well.
Consider the data shown in the plot below. It can be seen clearly that there is some correlation (negative correlation, to be exact) between the two features.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/naive-bayes-gaussian-discriminant-analysis/5.png">
</div>
<br>
Applying a univariate Gaussian distribution ("normal distribution" or "bell curve") to this data results in the following contour plot: although the two features are negatively correlated, the contours do not show any such dependency. On the contrary, if a multivariate Gaussian distribution is applied to the same data, the contours capture the correlation. Seeing the difference, it is also clear that the chance of the test points (red) being marked as normal is lower with the multivariate Gaussian than with the univariate one.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/naive-bayes-gaussian-discriminant-analysis/6.png">
</div>
<br>
So, the **multivariate Gaussian** distribution basically models $p(x)$ in one go, unlike the **univariate Gaussian**, which models the individual features $\{x_1, x_2, \cdots, x_n\}$ of $x$ separately.
The multivariate gaussian distribution is given by,
$$\boxed{p(x; \mu, \Sigma) = \frac {1} {(2\pi)^{n/2} \, |\Sigma|^{1/2}} \exp\left(-{1 \over 2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right)}$$
where,
1. $\mu \in \mathbb{R}^n$ and $\Sigma \in \mathbb{R}^{n \times n}$ are the parameters of the distribution.
2. $|\Sigma|$ is the determinant of the matrix $\Sigma$ (the covariance matrix).
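This density is also available in SciPy, which makes a handy cross-check against a hand-rolled version of the boxed formula. A minimal sketch with made-up parameters:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])  # off-diagonal 0.8: positively correlated features

x = np.array([0.5, 0.5])

# Hand-rolled density following the boxed formula
n = len(mu)
diff = x - mu
p_manual = np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) \
           / np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))

print(p_manual, multivariate_normal(mean=mu, cov=Sigma).pdf(x))  # should match
```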
The density estimation for multivariate gaussian distribution can be done using the following 2 formulas:
$$\boxed{
\begin{array}{ll}
\mu = {1 \over m} \sum_{i=1}^m x^{(i)}
\\
\Sigma = {1 \over m} \sum_{i=1}^m (x^{(i)} - \mu) (x^{(i)} - \mu)^T
\end{array}
}$$
<ins>**Steps in multivariate density estimation:**</ins>
1. Given a train dataset, estimate the parameters $\mu$ and $\Sigma$.
2. For a new example $x$, compute $p(x)$.
3. Flag as anomaly if $p(x)< \epsilon$.
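These three steps translate into a short sketch (`X_train`, `x_new`, and `epsilon` are placeholder assumptions):

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_multivariate(X):
    """Step 1: estimate mu and the full covariance matrix Sigma."""
    mu = X.mean(axis=0)
    diff = X - mu
    Sigma = diff.T @ diff / X.shape[0]  # divides by m, matching the formula
    return mu, Sigma

# Steps 2-3:
# mu, Sigma = fit_multivariate(X_train)
# p = multivariate_normal(mean=mu, cov=Sigma).pdf(x_new)
# is_anomaly = p < epsilon
```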
The covariance matrix is the term that brings in the major difference between the univariate and the multivariate Gaussian. The effects of the covariance matrix and of mean shifting can be seen in the plots below.
A covariance matrix is always symmetric about the main diagonal.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/naive-bayes-gaussian-discriminant-analysis/7.png">
</div>
<br>
1. The mean shifts the center of the distribution.
2. Diagonal elements are the spread of the distribution along corresponding features (also called the variance).
3. Off-diagonal elements capture the correlation (more precisely, the covariance) between features.
Also, the original model $p(x)$ in univariate gaussian is a special case of the multivariate gaussian distribution where the off-diagonal elements of the covariance matrix are constrained to be zero (contours are axis aligned).
# VIII) Univariate vs Multivariate Gaussian Distribution
<ins>**Univariate vs Multivariate Gaussian Distribution:**</ins>
1. The univariate model can be used when features are manually created to capture anomalies that take unusual combinations of values, whereas the multivariate Gaussian can be used when correlations between features must be captured as well.
2. The univariate model is computationally cheaper and hence scales well to larger datasets ($m = 10,000 \sim 100,000$), whereas the multivariate model is computationally expensive, mainly because of the term $\Sigma^{-1}$.
3. The univariate model works well even for small values of $m$. For the multivariate model, one must have $m > n$, or else $\Sigma$ is singular and hence not invertible.
4. Generally, the multivariate Gaussian is used when $m$ is much bigger than $n$, like $m > 10n$, because $\Sigma$ is a fairly large matrix with around $\frac{n^2}{2}$ parameters, which are learnt better in a setting with larger $m$.
<ins>**Remark:**</ins> A matrix might be singular because of the presence of redundant features, i.e. two features are linearly dependent or a feature is a linear combination of a set of other features. Such matrices are non-invertible.
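The remark is easy to verify numerically: a feature that is a linear multiple of another makes $\Sigma$ singular. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 3 * x1                          # redundant feature, linearly dependent on x1
X = np.column_stack([x1, x2])

Sigma = np.cov(X, rowvar=False, bias=True)  # divides by m, matching the formula
print(np.linalg.det(Sigma))          # ~0: Sigma is singular, hence not invertible
```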