---
tags: machine-learning
---
# K-means clustering
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/k-means-clustering/1.png" height="100%" width="70%">
</div>
> These are my personal notes taken for the course [Machine learning](https://www.coursera.org/learn/machine-learning#syllabus) by Stanford. Feel free to check the [assignments](https://github.com/3outeille/Coursera-Labs).
> Also, if you want to read my other notes, feel free to check them at my [blog](https://ferdinandmom.engineer/machine-learning/).
>
# I) Introduction
In the clustering problem, we are given a training set $\{x^{(1)},...,x^{(m)}\}$, and want to group the data into a few cohesive "clusters". Here, $x^{(i)} \in \mathbb{R}^n$ as usual, but no labels $y^{(i)}$ are given. So, this is an unsupervised learning problem. The **k-means clustering algorithm** is as follows:
$$\begin{aligned}
&\text{1. Initialize cluster centroids } \mu_1,\mu_2,...,\mu_k \in \mathbb{R}^n \text{ randomly.}
\\
&\text{2. Repeat until convergence:}
\end{aligned}$$
$$\begin{aligned}
&\text{For every } i \text{, set}
\\
&\qquad c^{(i)} := \arg\min_j \|x^{(i)} - \mu_j \|^2
\\
&\text{For every } j \text{, set}
\\
&\qquad \mu_j := \frac{\sum_{i=1}^{m}1_{\{c^{(i)} = j\}}x^{(i)}}{\sum_{i=1}^{m}1_{\{c^{(i)} = j\}}}
\end{aligned}$$
In the algorithm above, we have:
1. $k$, number of clusters.
2. $c^{(i)}$ is the index of the cluster to which $x^{(i)}$ belongs.
3. $\mu_j$ is a **cluster centroid** (center of a cluster).
# II) Algorithm explanation
To initialize the cluster centroids (in step 1 of the algorithm above), we could choose $k$ training examples randomly, and set the cluster centroids to be equal to the values of these $k$ examples. (Other initialization methods are also possible).
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/k-means-clustering/2.png">
<figcaption>Figure 1: 2 cluster centroids equal to 2 training examples</figcaption>
</div>
<br>
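As a rough sketch, this initialization could look as follows in NumPy (the names `X`, `k`, and `init_centroids` are placeholders of mine, not from the course):

```python
import numpy as np

def init_centroids(X, k, seed=0):
    """Pick k distinct training examples and use them as initial centroids."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=k, replace=False)  # k distinct row indices
    return X[idx].copy()
```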
The inner loop is then composed of 2 steps:
1. **Cluster assignment** step (clusters $c^{(i)}$ are updated).
2. **Move cluster** step (cluster centroids $\mu_j$ are updated).
### <ins>1. Cluster assignment</ins>
We assign each training example $x^{(i)}$ to the nearest cluster centroid $\mu_j$. To do so, we need to find the value $j$ that minimizes $\|x^{(i)} - \mu_j \|^2$ (the squared Euclidean distance).
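This step can be vectorized in NumPy; here is a minimal sketch, assuming `X` is the $m \times n$ data matrix and `mu` the $k \times n$ centroid matrix:

```python
import numpy as np

def assign_clusters(X, mu):
    """For every i, set c[i] = argmin_j ||x_i - mu_j||^2."""
    # Broadcasting yields an (m, k) matrix of squared distances
    # between each training example and each centroid.
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```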
### <ins>2. Move cluster</ins>
For each cluster $j$, compute the average of all the points assigned to it, then move the cluster centroid $\mu_j$ to that average.
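A sketch of this step, using the same hypothetical names as above:

```python
import numpy as np

def move_centroids(X, c, k):
    """For every j, set mu_j to the mean of the points assigned to cluster j."""
    # Note: a cluster that received no points would produce NaN here;
    # a common fix is to re-initialize such a centroid at a random point.
    return np.array([X[c == j].mean(axis=0) for j in range(k)])
```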
---
<ins>**Remark:**</ins>
- $\sum_{i=1}^{m}1_{\{c^{(i)} = j\}} x^{(i)}$ = sum of all the points $x^{(i)}$ assigned to cluster $j$.
- $\sum_{i=1}^{m}1_{\{c^{(i)} = j\}}$ = total number of points $x^{(i)}$ assigned to cluster $j$ (we will denote it as $n$).
Thus,
<br>
$\text{For every j, set:}$
$$\begin{aligned}
\mu_j := \frac{\sum_{i=1}^{m}1_{\{c^{(i)} = j\}}x^{(i)}}{\sum_{i=1}^{m}1_{\{c^{(i)} = j\}}}
= \frac{1}{n}\sum_{i=1}^{m}1_{\{c^{(i)} = j\}}x^{(i)}
\end{aligned}$$
# III) K-means algorithm
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/k-means-clustering/3.png">
<figcaption>Figure 2: Training examples are shown as dots, and cluster centroids are shown as crosses</figcaption>
</div>
<br>
- a. Original dataset.
- b. Random initial cluster centroids (in this instance, not chosen to be equal to two training examples).
- (c-f). Illustration of running two iterations of k-means. In each iteration, we assign each training example to the closest cluster centroid (shown by "painting" the training examples the same color as the cluster centroid to which it is assigned); then we move each cluster centroid to the mean of the points assigned to it.
Is the k-means algorithm guaranteed to converge? Yes it is, in a certain sense. In particular, let us define the **distortion function** to be:
$$\boxed{
J(c, \mu) = \sum_{i=1}^{m}\|x^{(i)} - \mu_{c^{(i)}}\|^2
}$$
where $\mu_{c^{(i)}}$ is cluster centroid of the cluster to which example $x^{(i)}$ has been assigned.
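Computed directly from this definition, a sketch of $J$ (with the same hypothetical `X`, `c`, `mu` as before):

```python
import numpy as np

def distortion(X, c, mu):
    """J(c, mu) = sum_i ||x_i - mu_{c_i}||^2."""
    # mu[c] selects, for each example, the centroid of its assigned cluster.
    return ((X - mu[c]) ** 2).sum()
```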
It can be shown that k-means is exactly **coordinate descent** on $J$.
The distortion function $J$ is a non-convex function, and so coordinate descent on $J$ is not guaranteed to converge to the global minimum. In other words, k-means can be susceptible to local optima. Very often k-means will work fine and come up with very good clusterings despite this. But if you are worried about getting stuck in bad local minima, one common thing to do is run k-means many times (using different random initial values for the cluster centroids $\mu_j$). Then, out of all the different clusterings found, pick the one that gives the lowest distortion $J(c, \mu)$.
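Putting the pieces together, here is a sketch of one k-means run plus this multiple-restart strategy, reusing the hypothetical helpers `init_centroids`, `assign_clusters`, `move_centroids`, and `distortion` sketched earlier (the toy data is made up):

```python
import numpy as np

def kmeans(X, k, seed=0, max_iters=100):
    """One run of k-means; returns (assignments, centroids, distortion)."""
    mu = init_centroids(X, k, seed=seed)
    for _ in range(max_iters):
        c = assign_clusters(X, mu)        # cluster assignment step
        new_mu = move_centroids(X, c, k)  # move cluster step
        if np.allclose(new_mu, mu):       # centroids stopped moving
            break
        mu = new_mu
    return c, mu, distortion(X, c, mu)

# Toy data: 300 points in R^2 standing in for the training set.
X = np.random.default_rng(0).normal(size=(300, 2))

# Run k-means with several random initializations and keep the
# clustering with the lowest distortion J(c, mu).
c, mu, J = min((kmeans(X, k=3, seed=s) for s in range(10)), key=lambda r: r[2])
```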
<ins>**What is coordinate descent ?**</ins>
**Coordinate descent** is an **optimization algorithm** that finds the minimum of a function by minimizing over one parameter (or block of parameters) at a time while holding the others fixed.
In the case of **distortion function**, $J(c, \mu)$, we will do the following:
<br>
$\text{Repeat until } J \text{ converges } \{$
$$\text{1. Minimize } J \text{ with respect to } c \text{ while holding } \mu \text{ fixed.}$$
$$\text{2. Minimize } J \text{ with respect to } \mu \text{ while holding } c \text{ fixed.}$$
$\}$
Thus, $J$ must monotonically decrease, and the value of $J$ must converge. (Usually, this implies that $c$ and $\mu$ will converge too).
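One could check this monotone decrease empirically by logging $J$ after each half-step; again a sketch built on the hypothetical helpers above:

```python
mu = init_centroids(X, k=3)
for t in range(5):
    c = assign_clusters(X, mu)      # minimize J over c, mu fixed
    print(t, "after assignment:", distortion(X, c, mu))
    mu = move_centroids(X, c, k=3)  # minimize J over mu, c fixed
    print(t, "after move:", distortion(X, c, mu))
```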