Convolutional Neural Network with Numpy (Slow)
In this post, we are going to see how to implement a Convolutional Neural Network using only Numpy.
The main goal here is not just to provide boilerplate code, but to give an in-depth explanation of the underlying mechanisms through illustrations, especially during the backward propagation, where things get trickier.
However, some knowledge of Convolutional Neural Network building blocks is required.
To see the full implementation, please refer to my repository.
For more advanced readers, here is another post where we implement a faster CNN using the im2col/col2im methods.
Also, if you want to read more of my posts, feel free to check out my blog.
I) Architecture
We are going to implement the LeNet-5 architecture.
[Figure: the LeNet-5 architecture]
II) Forward propagation
- An image is of shape (h, w, c) where:
- h: image height.
- w: image width.
- c: number of channels.
- Here is an RGB image (3 channels):
[Figure: an RGB image and its 3 channels]
1) Convolutional layer
- A convolution operation is defined as follows (the exact formulas are given a few lines below):
- In theory, a convolution operation is a cross-correlation operation with its kernels flipped by 180°.
- In practice, we don't really care whether a convolution or a cross-correlation was used, since the main goal is to learn the kernels. (Indeed, if you were to learn, say, English irregular verbs, it wouldn't matter whether you learn them from top to bottom or bottom to top. The most important thing is that you learn them!)
- However, if you decide to use a true convolution, you will have to apply the 180° rotation during both the forward and backward propagation.
- For your information, PyTorch's nn.Conv2d() uses cross-correlation.
- To make things easier in this blog post, "convolution" will refer to cross-correlation.
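- For reference, here are the standard textbook definitions (not specific to this implementation) for a 2D input $I$ and a kernel $K$:

$$(I \star K)(i, j) = \sum_{m}\sum_{n} I(i+m,\ j+n)\, K(m, n) \quad \text{(cross-correlation)}$$

$$(I \ast K)(i, j) = \sum_{m}\sum_{n} I(i-m,\ j-n)\, K(m, n) \quad \text{(convolution)}$$

- Flipping $K$ by 180° turns one into the other, which is why the distinction does not matter once the kernels are learned.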
- We are going to explain:
- How to get the output shape after a convolution.
- How the content of the output is generated after a convolution.
- At the beginning of our architecture, we want to perform a convolution between an input of shape (32,32,1) and 6 kernels of shape (5,5,1).
- In order to perform a convolution operation, both the input and the kernel must have the exact same number of channels (in our case, 1).
- The output will be a new image of shape (28,28,6).
- Since we have 6 kernels here, we are going to perform 6 convolution operations, which will produce 6 outputs (feature maps).
- The 6 feature maps will then be stacked to form the 6 channels of the new image.
- During the convolution operation, the following formula is used to get the (28,28) shape:

$$O = \left\lfloor \frac{I - K + 2p}{s} \right\rfloor + 1$$

- O: output shape.
- I: input shape.
- p: padding = 0 (default value).
- K: kernel shape.
- s: stride = 1 (default value).
- $\lfloor \cdot \rfloor$: floor function.
- Using the formula above for our example, we have:

$$O = \left\lfloor \frac{32 - 5 + 2 \times 0}{1} \right\rfloor + 1 = 28$$
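- As a quick sanity check, the same computation in code (a throwaway helper, not part of the repository):

```python
def conv_output_size(i, k, p=0, s=1):
    """O = floor((I - K + 2p) / s) + 1"""
    return (i - k + 2 * p) // s + 1

print(conv_output_size(32, 5))  # 28
```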
- Now that we know where the output shape comes from, let's see how the content of the output image is generated.
- During the convolution operation, the kernel slides over the whole input.
- In the following example, we perform a convolution between a (5,5,3) input and 1 kernel of size (3,3,3) to get a (3,3,1) image.
- At each slide, we perform an element-wise multiplication and sum everything to get a single value.
[Figure: sliding a (3,3,3) kernel over a (5,5,3) input to produce a (3,3,1) feature map]
- If we perform a convolution between a (5,5,3) input and 6 kernels of shape (3,3,3), we will have to repeat the convolution operation 6 times, once for each (3,3,3) kernel.
- This will result in an output of shape (3,3,6).
- Here is an implementation of what we have seen so far.
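- Below is a minimal sketch of what such a naive forward convolution could look like (the function name, argument layout, and the stride-1 / no-padding assumption are mine, not necessarily the repository's API):

```python
import numpy as np

def conv_forward(X, kernels):
    """Naive valid cross-correlation between X of shape (h, w, c)
    and kernels of shape (n_k, kh, kw, c), with stride 1 and no padding."""
    h, w, c = X.shape
    n_k, kh, kw, c_k = kernels.shape
    assert c == c_k, "input and kernels must have the same number of channels"

    out_h, out_w = h - kh + 1, w - kw + 1          # O = (I - K + 2*0)/1 + 1
    out = np.zeros((out_h, out_w, n_k))

    for k in range(n_k):                           # one feature map per kernel
        for i in range(out_h):
            for j in range(out_w):
                # element-wise multiplication over the window, then sum
                window = X[i:i + kh, j:j + kw, :]
                out[i, j, k] = np.sum(window * kernels[k])
    return out

# Example: a (32, 32, 1) input with 6 kernels of shape (5, 5, 1) -> (28, 28, 6)
X = np.random.randn(32, 32, 1)
K = np.random.randn(6, 5, 5, 1)
print(conv_forward(X, K).shape)  # (28, 28, 6)
```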
2) Pooling layer
- Our architecture uses average pooling layers. Another common pooling layer is the max pooling layer.
- The goal of average pooling is to reduce the height and width of our image (but not the number of channels) by using a stride > 1.
[Figure: average pooling operation]
- Here is the implementation.
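- As a reference, one possible naive average pooling forward pass (the 2×2 window and stride 2 follow LeNet-5's sub-sampling; names are illustrative assumptions):

```python
import numpy as np

def avgpool_forward(X, size=2, stride=2):
    """Naive average pooling on X of shape (h, w, c)."""
    h, w, c = X.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.zeros((out_h, out_w, c))

    for i in range(out_h):
        for j in range(out_w):
            window = X[i * stride:i * stride + size,
                       j * stride:j * stride + size, :]
            out[i, j, :] = window.mean(axis=(0, 1))  # one average per channel
    return out

# Example: (28, 28, 6) -> (14, 14, 6), the number of channels is unchanged
print(avgpool_forward(np.random.randn(28, 28, 6)).shape)  # (14, 14, 6)
```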
3) Multilayer perceptron
- To go from the output of the convolutional/pooling layers to the MLP part, we have to flatten that output.
- For example, the last convolutional layer gives an output of shape (5,5,16).
- After flattening it, we get a vector of size 5×5×16 = 400.
- Then, we just perform a weighted sum.
- For more information about it, please refer to my other blog post about forward propagation in MLP.
- Here is the implementation.
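- For illustration, a sketch of the flatten + weighted sum step (the layer size of 120 and the tanh activation follow LeNet-5, but the names and shapes here are my own assumptions):

```python
import numpy as np

def fc_forward(X, W, b):
    """Flatten X, then compute a weighted sum followed by an activation."""
    a = X.flatten()       # (5, 5, 16) -> (400,)
    z = W @ a + b         # W: (120, 400), b: (120,)
    return np.tanh(z)     # LeNet-5 style activation

X = np.random.randn(5, 5, 16)
W = np.random.randn(120, 400) * 0.01
b = np.zeros(120)
print(fc_forward(X, W, b).shape)  # (120,)
```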
III) Backward propagation
Here comes the tricky part. Most of the tutorials I have read so far only say that the backward propagation is the same as in an MLP (which is true). However, we will see that it's not that straightforward to implement, especially at the convolutional layer.
1) Multilayer perceptron
- To compute the loss gradient, we first need to compute the errors $\delta$.
- The error is computed differently when you are at:
- The last layer of the network (softmax level).
- Every other layer.
- For more details, please refer to one of my blog posts.
- At the last layer $L$, the formula to compute the error is:

$$\delta^L = a^L - y$$

- $\delta^L$: error at the last layer.
- $a^L$: activation function output at the last layer.
- $y$: ground truth label.
- At every other layer $l$, the formula to compute the error is:

$$\delta^l = \left(W^{l+1}\right)^T \delta^{l+1} \odot f'(z^l)$$

- $\delta^l$: error at layer $l$.
- $W^{l+1}$: weight matrix at layer $l+1$.
- $\delta^{l+1}$: error at layer $l+1$.
- $f'(z^l)$: derivative of the activation function at layer $l$.
- We can then compute the loss gradient at each layer with the following formula:

$$\frac{\partial \mathcal{L}}{\partial W^l} = \delta^l \left(a^{l-1}\right)^T$$

- This will then be used to update your weights:

$$W^l \leftarrow W^l - \eta \, \frac{\partial \mathcal{L}}{\partial W^l}$$

- Here is the implementation:
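- For a single hidden fully connected layer, a minimal sketch of those formulas could look like this (variable names are mine; the repository's code may be organized differently):

```python
import numpy as np

def fc_backward(delta_next, W_next, z, a_prev, d_activation):
    """Backward pass of one hidden fully connected layer.

    delta_next   : error at layer l+1
    W_next       : weight matrix at layer l+1
    z            : pre-activation at layer l
    a_prev       : activation output at layer l-1
    d_activation : derivative of the activation function
    """
    delta = (W_next.T @ delta_next) * d_activation(z)  # error at layer l
    dW = np.outer(delta, a_prev)                       # loss gradient w.r.t. W
    db = delta                                         # loss gradient w.r.t. b
    return delta, dW, db

# SGD update would then be: W -= lr * dW ; b -= lr * db
```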
2) Pooling layer
- During forward propagation, we averaged the values of the input within the pooling window.
- During backward propagation, we need to proportionally back-propagate the error to the input.
- Remember, no weight gradients are computed here! We only compute the layer gradient.
[Figure: back-propagating the error through the average pooling layer]
- Here is the implementation:
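- A sketch of that idea: the error received by each output cell is spread equally over the cells of its pooling window (names and the 2×2 / stride-2 defaults are assumptions):

```python
import numpy as np

def avgpool_backward(d_out, input_shape, size=2, stride=2):
    """Distribute the gradient d_out of shape (out_h, out_w, c) back to the input."""
    dX = np.zeros(input_shape)
    out_h, out_w, c = d_out.shape

    for i in range(out_h):
        for j in range(out_w):
            # each input cell of the window contributed 1/(size*size) to the average
            dX[i * stride:i * stride + size,
               j * stride:j * stride + size, :] += d_out[i, j, :] / (size * size)
    return dX

# Example: a (14, 14, 6) gradient back-propagated to a (28, 28, 6) input
print(avgpool_backward(np.random.randn(14, 14, 6), (28, 28, 6)).shape)  # (28, 28, 6)
```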
3) Convolutional layer
- Let's come back to the part of the architecture where we have to perform a convolution between a (14,14,6) input and 16 kernels of shape (5,5,6).
- This will output a (10,10,16) image.
[Figure: convolution between a (14,14,6) input and 16 kernels of shape (5,5,6) producing a (10,10,16) output]
- The idea of backward propagation is to propagate the gradient from the later layers back to the earlier ones.
- In order to perform backward propagation in the example above, we have to do 2 things:
- a) Compute the layer gradients at layer (10, 10, 16).
- b) Compute the kernel gradients at layer (10,10,16).
- The optimizer (SGD, Adam, RMSprop) will then update the kernel values.
- In the following, I will be using 2 formulas. For more details, feel free to take a look at this blog post and at this note.
a) Compute Layer gradient
- The formula to compute the layer gradient is:

$$\frac{\partial \mathcal{L}}{\partial X} = K \ast \frac{\partial \mathcal{L}}{\partial O}$$

- $\frac{\partial \mathcal{L}}{\partial X}$: input gradient.
- $K$: kernels.
- $\frac{\partial \mathcal{L}}{\partial O}$: output gradient.
- $\ast$: convolution operation.
- However, there is a little problem when we actually want to implement it.
- The formula asks us to perform a convolution operation between 16 kernels of shape (5,5,6) and $\frac{\partial \mathcal{L}}{\partial O}$ of shape (10,10,16).
- But we know that in order to perform a convolution operation, both arguments need to have the exact same number of channels, which is not the case here (6 != 16).
- During forward propagation, in order to get the (10,10, 16) output, 16 convolutions were performed between the input (14,14,6) and 16 kernels of shape (5,5,6).
- During backward propagation, the 16 channels (feature maps) of the (10,10) output now contain the gradient that needs to be back-propagated to the (14,14,6) input layer.
- Thus, we need to "broadcast" the gradient of each (10,10) feature map to its associated kernel, which will then be used to compute the gradient of the (14,14,6) input.
[Figure: back-propagating the (10,10,16) output gradient to the (14,14,6) input through the kernels]
- As you can see, sliding the kernels over the (14,14,6) input is in fact a convolution, even though it is less obvious to notice here.
b) Compute Kernel gradient
- The formula to compute the kernel gradient is:

$$\frac{\partial \mathcal{L}}{\partial K} = X \ast \frac{\partial \mathcal{L}}{\partial O}$$

- $\frac{\partial \mathcal{L}}{\partial K}$: kernel gradient.
- $X$: input image.
- $\frac{\partial \mathcal{L}}{\partial O}$: output gradient.
- $\ast$: convolution operation.
- Same problem as before: performing a convolution is again not straightforward because of the channel mismatch (6 != 16).
- During forward propagation, we perform a convolution between the (14,14,6) input and 16 kernels of shape (5,5,6), which outputs a (10,10,16) image.
- Thus, each feature map was produced by a convolution between the input and one kernel.
- Then, during backward propagation, each feature map of the output contains the gradient that needs to be back-propagated to its kernel.
- It makes sense that we need to "broadcast" the gradient of each (10,10) feature map to each "slide" we made over the input during forward propagation, and accumulate it into the gradient of the associated kernel.
[Figure: computing the kernel gradient from the input windows and the output gradient]
- Here is the implementation for the above steps:
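- As a naive sketch of both gradients in one pass (stride 1, no padding; names and structure are illustrative, not the repository's exact code):

```python
import numpy as np

def conv_backward(d_out, X, kernels):
    """Naive backward pass of a valid cross-correlation (stride 1, no padding).

    d_out   : output gradient of shape (out_h, out_w, n_k)
    X       : input of shape (h, w, c)
    kernels : kernels of shape (n_k, kh, kw, c)
    Returns (dX, dK), the gradients w.r.t. the input and the kernels.
    """
    n_k, kh, kw, c = kernels.shape
    out_h, out_w, _ = d_out.shape
    dX = np.zeros(X.shape)
    dK = np.zeros(kernels.shape)

    for k in range(n_k):
        for i in range(out_h):
            for j in range(out_w):
                # broadcast this output cell's gradient to the input window it came from
                dX[i:i + kh, j:j + kw, :] += d_out[i, j, k] * kernels[k]
                # accumulate the kernel gradient: input window weighted by the output gradient
                dK[k] += d_out[i, j, k] * X[i:i + kh, j:j + kw, :]
    return dX, dK

# Example: (14, 14, 6) input, 16 kernels of shape (5, 5, 6), output gradient (10, 10, 16)
X = np.random.randn(14, 14, 6)
K = np.random.randn(16, 5, 5, 6)
dX, dK = conv_backward(np.random.randn(10, 10, 16), X, K)
print(dX.shape, dK.shape)  # (14, 14, 6) (16, 5, 5, 6)
```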
- At this point in the post, I hope you understand how to build a Convolutional Neural Network from scratch in a naive way.
- However, the naive implementation takes a lot of time to train (mainly due to the nested for loops).
- As an example, it takes around 4 hours to perform a single epoch on the MNIST dataset.
- In the following post, we are going to see how to implement a faster CNN using im2col/col2im methods.