---
tags: machine-learning
---
# Recommendation System
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/recommendation-system/1.png" height="100%" width="70%">
</div>
> This is my personal notes taken for the course [Machine learning](https://www.coursera.org/learn/machine-learning#syllabus) by Stanford. Feel free to check the [assignments](https://github.com/3outeille/Coursera-Labs).
> Also, if you want to read my other notes, feel free to check them at my [blog](https://ferdinandmom.engineer/machine-learning/).
# I) Introduction
A recommender system, or recommendation system, is a subclass of information filtering systems that seeks to predict the "rating" or "preference" a user would give to an item.
<ins>**Problem Formulation: (Predicting Movie Ratings case)**</ins>
Let's define:
- $n_u$ = number of users.
- $n_m$ = number of movies.
- $r(i,j)=1$ if user $j$ has rated movie $i$.
- $y(i,j)$ = rating given by user $j$ to movie $i$, defined only if $r(i,j)=1$.
The objective of the recommender system is to use the movies users have already rated to predict the rating a user would give to a movie they have not rated yet.
To do so, there are two main methods:
1. Content-based filtering.
2. Collaborative filtering.
# II) Content-based filtering
Let's define:
- $x^{(i)}$ = feature vector for movie $i$ (in our case, the movie's genre: romance, action, ...).
- $\theta^{(j)}$ = parameter vector for user $j$ (user preferences: do they prefer romance to action?).
- $m^{(j)}$ = number of movies rated by user $j$.
The goal of this method is to **predict the user's rating of a non-rated movie based on movie characteristics**. For example, when a friend asks you for a book recommendation, it's natural to ask what kinds of books they have read and liked. From there, you can think of a few titles with characteristics (fantasy, sci-fi, ...) similar to the things they've read and liked in the past (the user's past book reviews).
So, in our case, you need to have beforehand:
1. the movie characteristics, $x^{(i)}$.
2. the user's past movie ratings, $y$.
With these, you can find the **user preferences**, denoted $\theta$. Once you have the user preferences, it becomes easy to predict the user's rating of a non-rated movie.
Suppose user $j$ has rated $m^{(j)}$ movies; then learning $\theta^{(j)}$ can be treated as a linear regression problem. So, to learn $\theta^{(j)}$,
$$\min_{\theta^{(j)}} {1 \over 2} \sum_{i: r(i, j)=1} \left [ (\theta^{(j)})^T x^{(i)} - y(i, j)\right ]^2 + {\lambda \over 2} \sum_{k=1}^n (\theta_k^{(j)})^2$$
To get the parameters for all our users, we do the following:
$$\boxed{
\min_{\theta^{(1)}, \cdots, \theta^{(n_u)}} {1 \over 2} \sum_{j=1}^{n_u} \sum_{i: r(i, j)=1} \left [(\theta^{(j)})^T x^{(i)} - y(i, j)\right ]^2 + {\lambda \over 2} \sum_{j=1}^{n_u} \sum_{k=1}^n (\theta_k^{(j)})^2
}$$
The cost function is then:
$$\boxed{
J(\theta^{(1)}, \cdots, \theta^{(n_u)}) = {1 \over 2} \sum_{j=1}^{n_u} \sum_{i: r(i, j)=1} \left [(\theta^{(j)})^T (x^{(i)}) - y(i, j) \right ]^2 + {\lambda \over 2} \sum_{j=1}^{n_u} \sum_{k=1}^n (\theta_k^{(j)})^2
}$$
The gradient descent update is then:
$$\boxed{
\begin{array}{ll}
\theta_k^{(j)} &= \theta_k^{(j)} - \alpha \left( \sum_{i: r(i, j)=1} \left [(\theta^{(j)})^T x^{(i)} - y(i, j)\right ] x_k^{(i)} \right) \text{, for k = 0 (bias/intercept term)}
\\
\theta_k^{(j)} &= \theta_k^{(j)} - \alpha \left( \sum_{i: r(i, j)=1} \left [(\theta^{(j)})^T x^{(i)} - y(i, j) \right ] x_k^{(i)} + \lambda \theta_k^{(j)}\right) \text{, otherwise }
\end{array}
}$$
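As a sketch, the per-user updates above can be vectorized with NumPy. The function and variable names below are my own, not from the course:

```python
import numpy as np

def learn_user_params(X, y, rated, alpha=0.01, lam=0.1, iters=500):
    """Learn one user's parameter vector theta by gradient descent.

    X     : (n_m, n+1) movie feature matrix, first column is the bias x_0 = 1
    y     : (n_m,) ratings by this user
    rated : (n_m,) boolean mask, True where r(i, j) = 1
    """
    theta = np.zeros(X.shape[1])
    Xr, yr = X[rated], y[rated]          # keep only the movies this user rated
    for _ in range(iters):
        err = Xr @ theta - yr            # (theta^(j))^T x^(i) - y(i, j)
        grad = Xr.T @ err                # sum of errors weighted by x_k^(i)
        grad[1:] += lam * theta[1:]      # regularize every term except k = 0
        theta -= alpha * grad
    return theta
```

Note that indexing with the boolean mask implements the $i: r(i,j)=1$ condition, and skipping `grad[0]` in the regularization line implements the special case for the bias term $k = 0$.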
<ins>**Remark:**</ins> The effectiveness of content-based recommendation depends on identifying the features $x^{(i)}$ properly, which is often not easy.
# III) Collaborative filtering
Collaborative filtering has the intrinsic property of feature learning (it can learn by itself what features to use) which helps overcome drawbacks of content-based recommender systems.
Given the user past movie review $y$ and the parameter vector $\theta$, the algorithm learns the values for the features $x^{(i)}$ by applying linear regression.
$$\min_{x^{(i)}} {1 \over 2} \sum_{j:r(i,j)=1} \left[ (\theta^{(j)})^T x^{(i)} - y(i,j) \right]^2 + {\lambda \over 2} \sum_{k=1}^n \left( x_k^{(i)} \right)^2$$
Intuitively, this boils down to the following scenario: given a movie's ratings by various users ($y$) and the user preferences $\theta$, the collaborative filtering algorithm tries to find the features that best represent the movie, i.e. those that minimize the squared error between predicted and actual ratings.
Since this is very similar to a linear regression problem, a regularization term is introduced to prevent overfitting of the learned features. By extending this, it is possible to learn the features of all the movies $i \in [1, n_m]$. Thus,
$$\min_{x^{(1)}, \cdots, x^{(n_m)}} {1 \over 2} \sum_{i=1}^{n_m} \sum_{j:r(i,j)=1} \left[ (\theta^{(j)})^T x^{(i)} - y(i,j) \right]^2 + {\lambda \over 2} \sum_{i=1}^{n_m} \sum_{k=1}^n \left( x_k^{(i)} \right)^2$$
where the gradient descent update on features is:
$$x_k^{(i)} := x_k^{(i)} - \alpha \left( \sum_{j:r(i,j)=1} \left[ (\theta^{(j)})^T x^{(i)} - y(i,j) \right] \theta_k^{(j)} + \lambda x_k^{(i)} \right)$$
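Symmetrically to the content-based case, learning one movie's features from fixed user parameters can be sketched as below (names are mine, and for simplicity this sketch has no bias feature):

```python
import numpy as np

def learn_movie_features(Theta, y, rated, alpha=0.01, lam=0.1, iters=500):
    """Learn one movie's feature vector x by gradient descent,
    holding the user parameters fixed.

    Theta : (n_u, n) user parameter matrix, rows are theta^(j)
    y     : (n_u,) ratings of this movie by each user
    rated : (n_u,) boolean mask, True where r(i, j) = 1
    """
    x = np.zeros(Theta.shape[1])
    Tr, yr = Theta[rated], y[rated]   # keep only the users who rated this movie
    for _ in range(iters):
        err = Tr @ x - yr             # (theta^(j))^T x^(i) - y(i, j)
        x -= alpha * (Tr.T @ err + lam * x)
    return x
```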
But it is also possible to solve for both $\theta$ and $x$ simultaneously, by minimizing a single cost function that combines the two earlier objectives. Thus,
$$\boxed{
\begin{array}{ll}
J(x^{(1)}, \cdots, x^{(n_m)}, \theta^{(1)}, \cdots, \theta^{(n_u)}) &= {1 \over 2} \sum_{(i,j):r(i,j)=1} \left [(\theta^{(j)})^T x^{(i)} - y(i, j)\right ]^2
\\
&+ {\lambda \over 2} \sum_{i=1}^{n_m} \sum_{k=1}^{n} (x_k^{(i)})^2
\\
&+ {\lambda \over 2} \sum_{j=1}^{n_u} \sum_{k=1}^n (\theta_k^{(j)})^2
\end{array}
}$$
$\sum_{(i,j):r(i,j)=1}$ is equivalent to looping through all the data where $r(i,j)=1$.
And the minimization objective can be written as,
$$\boxed{\min_{x^{(1)}, \cdots, x^{(n_m)}, \theta^{(1)}, \cdots, \theta^{(n_u)}} J(x^{(1)}, \cdots, x^{(n_m)}, \theta^{(1)}, \cdots, \theta^{(n_u)})}$$
<ins>**Remark:**</ins> Because the algorithm can learn features by itself, the bias convention $x_0 = 1$ (and the corresponding $\theta_0$) has been dropped; therefore $x \in \mathbb{R}^n$ and $\theta \in \mathbb{R}^n$.
To summarize, the collaborative filtering algorithm has the following steps:
1. Initialize $x^{(1)}, \cdots, x^{(n_m)}, \theta^{(1)}, \cdots, \theta^{(n_u)}$ to small random values.
2. Minimize $J(x^{(1)}, \cdots, x^{(n_m)}, \theta^{(1)}, \cdots, \theta^{(n_u)})$ using gradient descent or any other advanced optimization algorithm. The update rules below are obtained by taking the partial derivatives with respect to each $x_k^{(i)}$ and $\theta_k^{(j)}$. $$\boxed{
\begin{array}{ll}
x_k^{(i)} &= x_k^{(i)} - \alpha \left( \sum_{j:r(i,j)=1} \left [(\theta^{(j)})^T x^{(i)} - y(i,j) \right ] \theta_k^{(j)} + \lambda x_k^{(i)} \right) \\
\theta_k^{(j)} &= \theta_k^{(j)} - \alpha \left( \sum_{i: r(i, j)=1} \left[(\theta^{(j)})^T x^{(i)} - y(i, j)\right ] x_k^{(i)} + \lambda \theta_k^{(j)}\right)
\end{array}
}$$
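The three steps above can be sketched in a few lines of NumPy. This is my own minimal version, masking unrated entries with an indicator matrix `R`, not the course's reference implementation:

```python
import numpy as np

def collaborative_filtering(Y, R, n_features=2, alpha=0.005, lam=0.1,
                            iters=5000, seed=0):
    """Jointly learn movie features X and user parameters Theta.

    Y : (n_m, n_u) ratings (entries where R == 0 are ignored)
    R : (n_m, n_u) indicator, R[i, j] = 1 if user j rated movie i
    """
    rng = np.random.default_rng(seed)
    n_m, n_u = Y.shape
    # Step 1: initialize to small random values.
    X = rng.normal(scale=0.1, size=(n_m, n_features))
    Theta = rng.normal(scale=0.1, size=(n_u, n_features))
    # Step 2: simultaneous gradient descent on X and Theta.
    for _ in range(iters):
        err = (X @ Theta.T - Y) * R          # errors on rated entries only
        X_grad = err @ Theta + lam * X
        Theta_grad = err.T @ X + lam * Theta
        X -= alpha * X_grad
        Theta -= alpha * Theta_grad
    return X, Theta
```

Step 3 is then a single matrix product: `X @ Theta.T` gives every predicted rating at once.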
3. For a user with parameter $\theta$ and a movie with learned features $x$, the predicted star rating is given by $\theta^Tx$.
Consequently, the matrix of all predicted ratings of all movies by all users $Y$ can be written as:
$$Y = \left[
\begin{matrix}
(\theta^{(1)})^T x^{(1)} & (\theta^{(2)})^T x^{(1)} & \cdots & (\theta^{(n_u)})^T x^{(1)} \\
(\theta^{(1)})^T x^{(2)} & (\theta^{(2)})^T x^{(2)} & \cdots & (\theta^{(n_u)})^T x^{(2)} \\
\vdots & \vdots & \ddots & \vdots \\
(\theta^{(1)})^T x^{(n_m)} & (\theta^{(2)})^T x^{(n_m)} & \cdots & (\theta^{(n_u)})^T x^{(n_m)} \\
\end{matrix}
\right ]$$
where entry $(i,j)$ is the predicted value of $y(i,j)$, the rating of movie $i$ by user $j$.
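Equivalently, stacking the feature vectors as rows of a matrix $X$ and the parameter vectors as rows of $\Theta$, the whole prediction matrix is the single product $X\Theta^T$. For instance, with made-up learned values:

```python
import numpy as np

# Hypothetical learned values: 3 movies, 2 users, 2 features.
X = np.array([[0.9, 0.1],      # rows = movie feature vectors x^(i)
              [0.2, 1.0],
              [0.8, 0.7]])
Theta = np.array([[5.0, 0.0],  # rows = user parameter vectors theta^(j)
                  [0.0, 5.0]])

Y_pred = X @ Theta.T           # entry (i, j) is (theta^(j))^T x^(i)
```

Because the prediction matrix has rank at most $n$, this algorithm is also called low rank matrix factorization.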
# IV) Implementation detail
How do we handle the case where a user has not rated any movies?
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/recommendation-system/2.png">
</div>
<br>
Since Eve (user 5 in the figure) has rated no movies, no term of the squared-error sum ${1 \over 2} \sum_{(i,j):r(i,j)=1} \left[(\theta^{(j)})^T x^{(i)} - y(i, j)\right ]^2$ involves $\theta^{(5)}$, because the summation only ranges over rated movies. As far as $\theta^{(5)}$ is concerned, the cost function therefore reduces to,
$$J(x^{(1)}, \cdots, x^{(n_m)}, \theta^{(1)}, \cdots, \theta^{(n_u)}) = {\lambda \over 2} \sum_{i=1}^{n_m} \sum_{k=1}^{n} (x_k^{(i)})^2 + {\lambda \over 2} \sum_{j=1}^{n_u} \sum_{k=1}^n (\theta_k^{(j)})^2$$
When minimizing the cost function, we will find that $\theta^{(5)}$ is the zero vector, because the regularization term is the only term involving $\theta^{(5)}$ and it is minimized at $0$ (see above). Thus, every predicted rating for Eve will be $0$ ($(\theta^{(5)})^T x^{(i)} = 0$), which does not seem intuitively correct.
To prevent this, we will do **mean normalization**. Here is an example:
$$Y =
\begin{bmatrix}
5 & 5 & 0 & 0 \\
4 & ? & ? & 0\\
0 & 0 & 5 & 4\\
0 & 0 & 5 & 0 \\
\end{bmatrix} \, \quad
\mu =
\begin{bmatrix}
2.5 \\
2 \\
2.25 \\
1.25 \\
\end{bmatrix}$$
where $\mu_i = \frac{\sum_{j:r(i,j)=1} Y_{i,j}}{\sum_{j} r(i,j)}$ is the mean of movie $i$'s ratings over the users who rated it.
Then, let's define $Y' = Y - \mu$ (subtracting $\mu_i$ from every rating of movie $i$):
$$Y' =
\begin{bmatrix}
2.5 & 2.5 & -2.5 & -2.5 \\
2 & ? & ? & -2 \\
-2.25 & -2.25 & 2.75 & 1.75 \\
-1.25 & -1.25 & 3.75 & -1.25
\end{bmatrix}$$
We then train the model on $Y'$ instead of $Y$, and predict the rating of movie $i$ by user $j$ as $(\theta^{(j)})^T x^{(i)} + \mu_i$, for every $(i,j)$ and not only where $r(i,j)=1$. For a user with no ratings, such as Eve, the prediction falls back to the movie's mean rating $\mu_i$, which is more sensible than $0$.
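With NaNs standing in for the `?` entries, the normalization step can be sketched as:

```python
import numpy as np

# Ratings matrix from the example above; np.nan marks a missing rating ("?").
Y = np.array([[5, 5, 0, 0],
              [4, np.nan, np.nan, 0],
              [0, 0, 5, 4],
              [0, 0, 5, 0]], dtype=float)

mu = np.nanmean(Y, axis=1, keepdims=True)  # per-movie mean over rated entries only
Y_norm = Y - mu                            # train the model on Y_norm, not Y
```

`np.nanmean` ignores the missing entries, so `mu` matches the $\mu$ vector computed by hand above.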
# V) Content-based vs Collaborative filtering
A **content-based** recommendation engine works with existing profiles of users. A profile holds information about a user and their taste, where taste is based on the user's ratings for different items. Generally, whenever a user creates a profile, the recommendation engine runs a survey to get initial information about the user, in order to avoid the new-user (cold-start) problem.
In the recommendation process, the engine compares the items the user has already rated positively with the items they haven't rated, and looks for similarities. Items similar to the positively rated ones are recommended to the user. Built this way from the user's taste and behavior, a content-based model is efficient and personalized, yet it lacks something.
Let us understand this with an example. Assume there are four categories of news:
- Politics
- Sports
- Entertainment
- Technology
and there is a user A who has read articles related to Technology and Politics. The content-based recommendation engine will only recommend articles in these categories, and may never recommend anything from the other categories, since the user has never viewed those articles before.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/recommendation-system/3.png" width="50%">
</div>
<br>
This problem can be solved using another variant of recommendation algorithm known as Collaborative Filtering.
The idea of **collaborative filtering** is to find users in a community who share tastes. If two users rate the same items similarly, they have similar taste. Such users build a group, a so-called neighborhood. A user gets recommendations for items they haven't rated yet but that were positively rated by users in their neighborhood.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/recommendation-system/4.png">
</div>
<br>
Collaborative filtering has two main approaches:
1. <ins>**User Based Approach**:</ins> In this approach, items recommended to a user are based on the ratings given by users of the same neighborhood, with whom they share common preferences. If an article was positively rated by the community, it is recommended to the user. In the user-based approach, the articles a user has already rated play an important role in finding the group that shares their tastes.
2. <ins>**Item Based Approach**:</ins> Based on the observation that users' tastes remain constant or change only slightly, similar articles form neighborhoods according to users' ratings. The system then generates recommendations from articles in the neighborhood that the user might prefer.
<div style="text-align:center">
<img src="https://raw.githubusercontent.com/valoxe/image-storage-1/master/blog-machine-learning/recommendation-system/5.png">
</div>
<br>
Let's try to understand the picture above. Say there are three users A, B and C.
1. In the **user-based approach**, users A and C are similar because both like Strawberry and Watermelon. User A also likes Grapes and Orange, so the user-based approach will recommend Grapes and Orange to user C.
2. In the **item-based approach**, Grapes and Watermelon form a neighborhood of similar items: irrespective of users, items that are similar form a neighborhood. So when user C likes Watermelon, the other item from the same neighborhood, i.e. Grapes, is recommended by the item-based approach.