These are my personal notes taken for the Machine Learning course by Stanford. Feel free to check the assignments.
Also, if you want to read my other notes, feel free to check them at my blog.
In classification, your algorithm produces discrete outcomes: one/zero, yes/no, do/don't and so on.
Spam versus non-spam emails is a traditional example of a classification task: you want to predict whether the email fed to your program is spammy or not, where usually 0 means not spam and 1 means spam.
Formally, we want to predict a variable $y \in \{0, 1\}$, where 0 is called the negative class, while 1 is called the positive class. Such a task is known as binary classification.
Other classification problems might require more than a binary output, for example where $y \in \{0, 1, 2, \dots, n\}$. Such a task is known as multiclass classification.
Instead of our output vector being a continuous range of values, it will only be 0 or 1 ($y \in \{0, 1\}$), where 0 is usually taken as the "negative class" and 1 as the "positive class".
We can approach a binary classification problem with two methods: linear regression and logistic regression.
Logistic regression is a more performant algorithm used in classification problems. The most important feature is its ability to produce a sort of hypothesis function of this kind:

$$0 \le h_\theta(x) \le 1$$
The hypothesis function is slightly different from the one used in linear regression. For logistic regression,

$$h_\theta(x) = g(\theta^\top x)$$

which is the traditional hypothesis function $\theta^\top x$ processed by a new function $g$, defined as:

$$g(z) = \frac{1}{1 + e^{-z}}$$
It is called the sigmoid function (or logistic function) and looks like figure 1 below:
In a sigmoid function, as the input $z$ goes to $-\infty$, the output approaches 0; as $z$ goes to $+\infty$, the output approaches 1. This works like an output limiter which keeps the hypothesis function bounded between 0 and 1.
Thus, the hypothesis function is:

$$h_\theta(x) = \frac{1}{1 + e^{-\theta^\top x}}$$
The logistic regression hypothesis function outputs a number between 0 and 1. You can think of it as the estimated probability that $y = 1$ for a new input example $x$.
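To make the formulas above concrete, here is a minimal sketch in Python (with NumPy) of the sigmoid and the logistic hypothesis function. The names `sigmoid` and `hypothesis`, as well as the example numbers, are my own choices, not part of the course material.

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^(-z)), bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    """h_theta(x) = g(theta^T x): estimated probability that y = 1."""
    return sigmoid(np.dot(theta, x))

# Example with made-up parameters and features.
theta = np.array([0.5, -1.2, 2.0])
x = np.array([1.0, 0.3, 0.8])   # x[0] = 1 is the usual intercept term
print(hypothesis(theta, x))      # a value between 0 and 1
```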
Let's take the banana classification task. Our goal is to determine whether or not a banana is ripe. For simplicity, let's consider that we are given only the size ($x_1$) and the "yellowness" ($x_2$) of the banana. So my feature vector is defined as:

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$
It's now time to define whether a new picture of a banana given in input to my algorithm is ripe or not. I grab its feature vector and plug it into the hypothesis function, which magically produces some outcome, let's say:

$$h_\theta(x) = 0.9$$
The hypothesis is telling me that for the new banana in input the probability that $y = 1$ is 0.9. In other words, the new banana in the picture has a 90% chance of being ripe. Let me write it more formally and generally:

$$h_\theta(x) = P(y = 1 \mid x; \theta)$$
In words, the hypothesis function tells you the probability that $y = 1$ given the input $x$, parametrized by $\theta$.
Since the outcome $y$ is restricted to the two values 0 or 1, I can compute the probability that $y = 0$ as well. Let's figure out the following statement:

$$P(y = 0 \mid x; \theta) + P(y = 1 \mid x; \theta) = 1$$
Thus,

$$P(y = 0 \mid x; \theta) = 1 - P(y = 1 \mid x; \theta)$$
Let's find the probability of $y$ being 0 (unripe banana):

$$P(y = 0 \mid x; \theta) = 1 - 0.9 = 0.1$$
In other words, the new banana has a 10% chance of being unripe.
Binary output
So far I've talked about percentages: where is the binary output? It's very simple to obtain. We already know that the hypothesis function returns values in the range $[0, 1]$.
Let's make the following assumption:

$$\text{predict } y = 1 \quad \text{if } h_\theta(x) \ge 0.5$$
$$\text{predict } y = 0 \quad \text{if } h_\theta(x) < 0.5$$
That's kind of intuitive. Now, when exactly is $h_\theta(x)$ above or below $0.5$? If you look at the generic sigmoid function above (figure 1), you'll notice that $g(z) \ge 0.5$ whenever $z \ge 0$.
Since the hypothesis function for logistic regression is defined as $h_\theta(x) = g(\theta^\top x)$, I can clearly state that $h_\theta(x) \ge 0.5$ whenever $\theta^\top x \ge 0$ (because $z = \theta^\top x$).
The same reasoning goes for $h_\theta(x) < 0.5$: we predict $y = 0$ whenever $\theta^\top x < 0$.
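As a small illustration of the 0.5 threshold, reusing the hypothetical `hypothesis` helper sketched earlier, turning the probability into a binary label could look like this:

```python
def predict(theta, x, threshold=0.5):
    """Return 1 if the estimated probability h_theta(x) >= threshold, else 0.

    Note that h_theta(x) >= 0.5 is equivalent to theta^T x >= 0,
    so the decision only depends on the sign of theta^T x.
    """
    return 1 if hypothesis(theta, x) >= threshold else 0
```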
Decision boundary
Suppose you have a training set as in figure 2 below. Two features, $x_1$ (size of a banana) and $x_2$ (yellowness of a banana), and a bunch of items scattered around the plane.
Crosses represent unripe bananas whereas circles represent ripe ones.
The task of the logistic regression algorithm is to separate those points by drawing a line between them. That line is technically called the decision boundary and it is used to make inferences about the dataset:
Everything that is below the decision boundary belongs to category A and the remaining part belongs to category B.
The decision boundary is a line, hence it can be described by an equation. As in linear regression, the logistic regression algorithm will be able to find the best $\theta$ parameters in order to make the decision boundary actually separate the data points correctly.
Suppose that we know the hypothesis function:

$$h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_2)$$
and through some magical procedures we end up with the following parameters:
We know since the beginning that $h_\theta(x) = g(\theta^\top x)$ and that for the current example $\theta^\top x = \theta_0 + \theta_1 x_1 + \theta_2 x_2$, which is, not surprisingly, the equation of a line (the decision boundary). We also know from the previous paragraph that we can predict $y = 1$ whenever $\theta^\top x \ge 0$. Thus, we can state that:

$$\text{predict } y = 1 \quad \text{if } \theta_0 + \theta_1 x_1 + \theta_2 x_2 \ge 0$$
And for our specific example:
If you draw the equation you will obtain the decision boundary line in figure 2 above.
In words, given the current training set, pick a new example from the real world with features $x_1$ and $x_2$: a banana with a certain size and a certain level of yellowness.
It will be classified as 1 (i.e. ripe) whenever it satisfies the inequality $\theta_0 + \theta_1 x_1 + \theta_2 x_2 \ge 0$, that is, whenever it lies on or above the decision boundary. It will be classified as 0 (i.e. unripe) otherwise.
Here the power of the sigmoid function emerges. It does not matter how far the new example lies from the decision boundary: the hypothesis function always outputs a value between 0 and 1, which we can still read as a probability.
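Below is a small sketch of this classification rule, assuming a linear decision boundary with two features. The parameter values in `theta` are made up just to have something concrete to plug in; they are not the ones from the figure.

```python
import numpy as np

# Hypothetical parameters theta = [theta_0, theta_1, theta_2] for
# the boundary theta_0 + theta_1 * x1 + theta_2 * x2 = 0.
theta = np.array([-3.0, 1.0, 1.0])

def classify(x1, x2):
    """Return 1 (ripe) if the point lies on or above the boundary, else 0 (unripe)."""
    z = theta[0] + theta[1] * x1 + theta[2] * x2
    return 1 if z >= 0 else 0

print(classify(2.0, 2.5))  # above the line x1 + x2 = 3 -> 1
print(classify(1.0, 1.0))  # below the line -> 0
```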
In the real world you will find oddly-shaped decision boundaries, as a simple straight line doesn't always separate things well. Those shapes are usually described by polynomial equations.
Logistic regression has no built-in ability to define the decision boundary's equation. Usually it's good practice to look at data and figure out the appropriate shape to implement: that's the so-called data exploration.
You might remember the original cost function used in linear regression (If you don't, check out my post). I can tell you right now that it's not going to work here with logistic regression.
Let me go back for a minute to the cost function we used in linear regression:

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$
which can be rewritten in a slightly different way:

$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \frac{1}{2} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$
Now let's make it more general by defining a new function:

$$\mathrm{Cost}\left(h_\theta(x^{(i)}), y^{(i)}\right) = \frac{1}{2} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$
Thus,

$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \mathrm{Cost}\left(h_\theta(x^{(i)}), y^{(i)}\right)$$
However, we know that the linear regression cost function cannot be used in logistic regression problems: plugging the sigmoid into the squared-error cost makes $J(\theta)$ non-convex, so gradient descent is no longer guaranteed to find the global minimum. So what is this all about?
Well, it turns out that for logistic regression we just have to find a different $\mathrm{Cost}$ function, while the summation part stays the same.
For logistic regression, the $\mathrm{Cost}$ function is defined as:

$$\mathrm{Cost}(h_\theta(x), y) = \begin{cases} -\log(h_\theta(x)) & \text{if } y = 1 \\ -\log(1 - h_\theta(x)) & \text{if } y = 0 \end{cases}$$
The $i$ indexes have been removed for clarity.
When $y = 1$, the output (i.e. the cost to pay) approaches 0 as $h_\theta(x)$ approaches 1. Conversely, the cost to pay grows to infinity as $h_\theta(x)$ approaches 0. You can clearly see it in figure 3 below (left side).
This is a desirable property: we want a bigger penalty as the algorithm predicts something far away from the actual value. If the label is $y = 1$ but the algorithm predicts $h_\theta(x) = 0$, the outcome is completely wrong.
Conversely, the same intuition applies when $y = 0$, as depicted in figure 3 below (right side): bigger penalties when the label is $y = 0$ but the algorithm predicts $h_\theta(x) = 1$.
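A direct translation of this per-example cost into Python might look like the following sketch (the `cost_single` name is mine); it mainly shows the if/else structure that the compact form below will remove.

```python
import numpy as np

def cost_single(h, y):
    """Cost paid for one example, given prediction h = h_theta(x) and label y in {0, 1}."""
    if y == 1:
        return -np.log(h)
    else:
        return -np.log(1.0 - h)

print(cost_single(0.9, 1))  # small cost: confident and correct
print(cost_single(0.9, 0))  # large cost: confident and wrong
```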
What we have just seen is the verbose version of the $\mathrm{Cost}$ function for logistic regression. We can make it more compact, into a one-line expression: this will help us avoid boring if/else statements when converting the formula into an algorithm.

$$\mathrm{Cost}(h_\theta(x), y) = -y \log(h_\theta(x)) - (1 - y) \log(1 - h_\theta(x))$$
Proof:
Try replacing $y$ with 0 and 1: when $y = 1$ the second term vanishes and you are left with $-\log(h_\theta(x))$; when $y = 0$ the first term vanishes and you are left with $-\log(1 - h_\theta(x))$. These are exactly the two pieces of the original function.
With this more compact version of the $\mathrm{Cost}$ function in place, the logistic regression cost function can be rewritten as:

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log\left(h_\theta(x^{(i)})\right) + (1 - y^{(i)}) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$
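As a sketch, the full cost function could be implemented in a vectorized way with NumPy as below; here `X` is assumed to be the $m \times n$ design matrix (one example per row) and `y` the vector of 0/1 labels, which is my own convention, not notation from the notes above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    """Logistic regression cost J(theta), vectorized over all m examples."""
    m = len(y)
    h = sigmoid(X @ theta)  # predictions for every example
    return -(1.0 / m) * (y @ np.log(h) + (1 - y) @ np.log(1 - h))
```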
We now have the hypothesis function and the cost function. It's now time to find the best values for the $\theta$ parameters, or in other words, to minimize the cost function by running the gradient descent algorithm.
Apply gradient descent:

$$\text{repeat until convergence} \; \left\{ \quad \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta) \quad \right\}$$
The general formula for the partial derivative of the cost function is:

$$\frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$
Surprisingly, it looks identical to what we were doing for multivariate linear regression. What has changed, however, is the definition of the hypothesis $h_\theta(x)$: for linear regression it was $h_\theta(x) = \theta^\top x$, whereas for logistic regression it is $h_\theta(x) = \frac{1}{1 + e^{-\theta^\top x}}$.
In linear regression, you are trying to find the best-fit line through the plotted data.
In logistic regression, you are trying to separate the plotted data into categories, asking: what is the probability that a point belongs to the "positive class"?
Let's compute $\frac{\partial}{\partial \theta_j} J(\theta)$.
First, calculate the derivative of the sigmoid function (it will be useful while finding the partial derivative of $J(\theta)$):

$$g'(z) = \frac{d}{dz} \frac{1}{1 + e^{-z}} = \frac{e^{-z}}{(1 + e^{-z})^2} = \frac{1}{1 + e^{-z}} \cdot \frac{e^{-z}}{1 + e^{-z}} = \frac{1}{1 + e^{-z}} \cdot \left(1 - \frac{1}{1 + e^{-z}}\right) = g(z)\,\big(1 - g(z)\big)$$
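As a quick sanity check of this identity (not part of the original notes), one can compare $g(z)\,(1 - g(z))$ against a numerical finite-difference approximation of $g'(z)$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z, eps = 0.7, 1e-6
analytic = sigmoid(z) * (1.0 - sigmoid(z))
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(analytic, numeric)  # the two values should agree to several decimals
```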
Now we are ready to work out the resulting partial derivative:

$$\begin{aligned}
\frac{\partial}{\partial \theta_j} J(\theta)
&= \frac{\partial}{\partial \theta_j} \left( -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log\left(h_\theta(x^{(i)})\right) + (1 - y^{(i)}) \log\left(1 - h_\theta(x^{(i)})\right) \right] \right) \\
&= -\frac{1}{m} \sum_{i=1}^{m} \left[ \frac{y^{(i)}}{h_\theta(x^{(i)})} - \frac{1 - y^{(i)}}{1 - h_\theta(x^{(i)})} \right] \frac{\partial}{\partial \theta_j} h_\theta(x^{(i)}) \\
&= -\frac{1}{m} \sum_{i=1}^{m} \left[ \frac{y^{(i)}}{h_\theta(x^{(i)})} - \frac{1 - y^{(i)}}{1 - h_\theta(x^{(i)})} \right] h_\theta(x^{(i)}) \left(1 - h_\theta(x^{(i)})\right) x_j^{(i)} \\
&= -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \left(1 - h_\theta(x^{(i)})\right) - (1 - y^{(i)})\, h_\theta(x^{(i)}) \right] x_j^{(i)} \\
&= -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} - h_\theta(x^{(i)}) \right] x_j^{(i)} \\
&= \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}
\end{aligned}$$

where the chain rule was used together with $\frac{\partial}{\partial \theta_j} h_\theta(x^{(i)}) = h_\theta(x^{(i)}) \left(1 - h_\theta(x^{(i)})\right) x_j^{(i)}$, which follows from the sigmoid derivative above.
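A minimal, vectorized gradient descent loop based on this derivative might look like the sketch below; as before, `X`, `y`, the learning rate `alpha`, and the iteration count are my own illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, num_iters=1000):
    """Fit logistic regression parameters theta by batch gradient descent.

    X: (m, n) design matrix with a leading column of ones for the intercept.
    y: (m,) vector of 0/1 labels.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        h = sigmoid(X @ theta)           # h_theta(x) for every example
        grad = (X.T @ (h - y)) / m       # (1/m) * sum((h - y) * x_j)
        theta -= alpha * grad            # simultaneous update of every theta_j
    return theta
```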
Let's say you want a learning algorithm that tags your emails as "friend", "spam", "work".
Since there are more than two output classes, we are no longer dealing with a binary classification problem.
In a multiclass classification problem, we isolate one class at a time (the "positive class") from all the others (the "negative class"). It's like solving a binary classification problem for each class. Here is an example below:
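As an illustration of this one-vs-all idea, here is a rough sketch that reuses the hypothetical `gradient_descent` and `sigmoid` helpers from above to train one binary classifier per class; the class encoding and helper names are assumptions of mine, not part of the course notes.

```python
import numpy as np

def one_vs_all(X, y, num_classes, alpha=0.1, num_iters=1000):
    """Train one logistic regression classifier per class (one-vs-all).

    y contains integer class labels 0 .. num_classes-1 (e.g. friend/spam/work).
    Returns a (num_classes, n) matrix of parameters, one row per class.
    """
    m, n = X.shape
    all_theta = np.zeros((num_classes, n))
    for c in range(num_classes):
        binary_y = (y == c).astype(float)   # current class -> 1, everything else -> 0
        all_theta[c] = gradient_descent(X, binary_y, alpha, num_iters)
    return all_theta

def predict_class(all_theta, x):
    """Pick the class whose classifier reports the highest probability."""
    return int(np.argmax(sigmoid(all_theta @ x)))
```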