# Project 03 - Object classification using Neural networks
## Authors
| Name | Student ID | Role |
|:-------------:|:-----------:|:------:|
| La Thanh Thai | 18127207 | Leader |
| Hy Phu Quyen | 18127195 | Member |
| Huynh Duc Le | 18127126 | Member |
**Course**: Introduction to AI (**CSC14003**) - 18CLC6
**FIT @ VNU-HCMUS**
## Assignment - Contribution
| Name | Tasks | Completion Rate |
| -------- | -------- | -------- |
| La Thanh Thai | Train, Test model, Report, Make video | 100% |
| Hy Phu Quyen | Research, Train, Test model, Report, Make video | 100% |
| Huynh Duc Le | Train, Test model, Report, Make video | 100% |
## Environment Setup
* Project was built on `Python 3.7.3` with the `Visual Studio 2019` IDE.
* The model was trained on `Google Colab`, accessed from `Windows`.
**Necessary libraries**
`pip install opencv-python`
`pip install numpy`
`pip install tensorflow`
`pip install np_utils`
**HOWTO RUN?**
- **Step 1:** Put images in the folder DATA/TEST/xx.png, where xx is a number following the format below (fewer than 25 images):
`1.png,2.png,...`
- **Step 2:** Run `main.py`; it will display some statistics of our model and label all the images in the folder above.
## Abstract
- We tried the different models listed in the `References` section, but those models underperformed, giving us inaccurate results with a high loss rate.
- In the end, we decided to use the model from [this site](https://github.com/khanguyen1207/My-Machine-Learning-Corner/blob/master/Zalando%20MNIST/fashion.ipynb).
- The table below summarizes the model architecture we use:

## Introduction
- Implement an artificial neural network, either a traditional network or a deep network, for object classification.
- Image classification on the Fashion-MNIST dataset is the `"hello world"` program for approaching deep learning in the AI field.
- Plan:
+ Prepare background knowledge on simple CNNs.
+ Select some models to solve the problem.
+ Test all selected models and record the accuracy each one achieves.
+ Based on those accuracies and our test images, pick the model with the best performance.
- Analyze the experimental results, including tables and figures demonstrating the training loss, accuracies, and successful and/or failed cases.
- Examine the effects of the neural network's hyperparameters and parameters on its performance and accuracy.
## Background/Related Work
- The top 5 powerful CNN architectures that laid the foundation of today's computer vision achievements [\[0\]][0]:
```
+ LeNet-5
+ AlexNet
+ VGGNet
+ GoogLeNet
+ ResNet
```
[0]: https://medium.com/datadriveninvestor/five-powerful-cnn-architectures-b939c9ddd57b
- We tried to apply ResNet to this problem, but after a long time :clock1: (about **2 hours**) we hit a big **BUG**, and the result was not much better than the one given by the simple CNN.
- Because the problem is not especially complex, and for the reason above, we decided to use a simple network: a variant of the `LeNet-5` architecture.
## Directory
* 18127126_18127195_18127207
* SOURCE
* DataController.py
* Model.py
* main.py
* DATA
* Fashion MNIST (Train)
* ... (\*.png)
* TEST (Test)
* ... (\*.png)
* Report.pdf
## Youtube [link](https://youtu.be/B1aVqWvLAcw)
Note: Remember to turn on "subtitle".
## Approach
**Basic CNN architecture**:
> Definition: A CNN consists of one or more convolutional layers, often with a subsampling layer, which are followed by one or more fully connected layers as in a standard neural network. [\[1\]][1]
> The design of a CNN is motivated by the discovery of a visual mechanism, the visual cortex, in the brain. The visual cortex contains a lot of cells that are responsible for detecting light in small, overlapping sub-regions of the visual field, which are called receptive fields. These cells act as local filters over the input space, and the more complex cells have larger receptive fields. The convolution layer in a CNN performs the function that is performed by the cells in the visual cortex. [\[1\]][1]

*Typical block diagram of a CNN* [\[1\]][1]
[1]: https://ip.cadence.com/uploads/901/cnn_wp-pdf
The Convolutional Layer
: Extracts local features from the input while excluding 'noise'.
: Uses a `kernel` to extract local features (the `kernel size` is up to you). The convolution kernel weights are updated automatically during training.
: **Why do we need more convolutional layers?**
* The first convolutional layer extracts low-level features such as edges, lines, and corners.
* The following, higher-level convolutional layers extract high-level features: the interpretation or classification of a scene as a whole [\[2\]][2].
[2]: https://stackoverflow.com/questions/26590705/difference-between-low-level-and-high-level-feature-detection-extraction
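What a convolution layer computes can be sketched in plain NumPy. The image and the hand-crafted edge kernel below are purely illustrative; in a real CNN the kernel weights are learned during training (and, like most deep learning libraries, this sketch actually computes cross-correlation):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Multiply the kernel with each local window and sum the result
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical step edge: dark on the left, bright on the right
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# Hand-crafted vertical-edge kernel (illustrative only)
kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

response = conv2d(image, kernel)
print(response)  # strongest response in the middle column, where the edge is
```

The output is large exactly where the kernel's pattern (a left-to-right brightness jump) appears in the image, which is the sense in which a convolution layer "detects" a local feature.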
The Subsampling Layer
: After the features are extracted, the subsampling layer reduces (or removes) noise by keeping only the key features, reducing the resolution of the feature maps.
: The two main subsampling methods are `average pooling` and `max pooling`.

*Pictorial representation of max pooling and average pooling* [\[1\]][1]
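The two pooling methods can be sketched in NumPy as follows; the 4x4 feature map is illustrative:

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping 2-D pooling with a size x size window."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]  # drop ragged edges
    # Split the array into (size x size) blocks, then reduce each block
    blocks = x.reshape(h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))      # average pooling

features = np.array([
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 8],
], dtype=float)

print(pool2d(features, mode="max"))  # [[4. 2.] [2. 8.]]
print(pool2d(features, mode="avg"))  # [[2.5  1.  ] [1.25 6.5 ]]
```

Either way, a 4x4 feature map becomes 2x2: the resolution drops while the strongest (or average) activation in each region is kept.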
The Fully-Connected and Output Layer
: Used for classification.
: Its weights are determined during the training process.
: It signals distinct identification of the likely features found by each hidden layer.
: Each unit of this layer has an activation function: it takes the results of the convolution/pooling process and uses them to classify the image under the corresponding label.
: **Why do we need a non-linear activation function?**
* The input to a network is usually a linear transformation (input × weight), but the real world and its problems are non-linear. [\[3\]][3]
* Non-linearity is needed in activation functions because their aim in a neural network is to produce a nonlinear decision boundary via non-linear combinations of the weights and inputs. [\[3\]][3]
[3]: https://stackoverflow.com/questions/9782071/why-must-a-nonlinear-activation-function-be-used-in-a-backpropagation-neural-net
**A look from the mathematical view**
- We chose the activation function `ReLU (Rectified Linear Unit)` because it is computationally cheap and piecewise linear, which allows the network to converge quickly; the graph below describes the function.
<p align="center">
<img width="460" height="300" src="https://i.imgur.com/iQ8Ok6q.png">
</p>
* **Drawbacks**:
    * The formula for the function is $f(x) = max(0, x)$; it is not differentiable at 0, and all negative inputs become 0 and pass no gradient onward, which is called the "dying ReLU" problem.
- The `Softmax` function is used as the last activation function of a neural network to normalize its output into a probability distribution over the predicted output classes. [\[4\]][4]
[4]: https://en.wikipedia.org/wiki/Softmax_function
- `Backpropagation`: this algorithm modifies the parameters, such as the weights (fully-connected layers) and kernels (convolutional layers), by repeatedly adjusting the connection weights in the network so as to minimize a measure of the difference between the network's actual output vectors and the desired output vectors.
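The two activation functions discussed above can be sketched in a few lines of NumPy (the input values are illustrative):

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x): negative inputs (and their gradients) become 0
    return np.maximum(0, x)

def softmax(z):
    # Subtract the max before exponentiating for numerical stability;
    # the result is a probability distribution (non-negative, sums to 1)
    e = np.exp(z - z.max())
    return e / e.sum()

print(relu(np.array([-2.0, 0.0, 3.0])))  # [0. 0. 3.]

logits = np.array([2.0, 1.0, -1.0])      # raw scores from the last Dense layer
probs = softmax(logits)
print(probs)  # largest logit gets the largest probability
```

Note how `relu` zeroes out every negative value (the source of the "dying ReLU" issue above), while `softmax` preserves the ranking of the logits but rescales them into class probabilities.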
## Experiment
- We separated the program into 3 modules:
```
Model: Construct the model
DataController: Load dataset and images to test
main: Operate program
```
- Dataset taken from: https://github.com/zalandoresearch/fashion-mnist
- `TensorFlow` supports all the functions needed for this project, including every layer required to build the model.
- Functions used to build the model:

+ The 3 preprocessing functions help avoid overfitting (the images in the test dataset are completely different from those in the training set).
+ Hyperparameters were chosen based on the experiments of [this site](https://www.kaggle.com/cdeotte/how-to-choose-cnn-architecture-mnist "Kaggle") \[5\].
+ Function to add a 2D convolutional layer to the model: 32 filters for the first layer and 64 filters for the following layers.
+ Function to add Dropout: a trick to prevent our network from overfitting, helping it generalize better.
+ Function to add fully-connected layers: the feature maps must be flattened to 1 dimension after feature extraction; the first `Dense 128` neurons identify features, and `Dense 10` classifies the image into one of the 10 classes.
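The layers described above can be sketched as a minimal Keras model. The filter counts (32, 64) and dense sizes (128, 10) follow the description above; the 3x3 kernel size, the max-pooling layers, the 0.5 dropout rate, and the optimizer/loss choices are assumptions for illustration, not necessarily what the project's `Model.py` uses:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model():
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),                 # Fashion-MNIST image size
        layers.Conv2D(32, (3, 3), activation="relu"),    # first conv: 32 filters
        layers.MaxPooling2D((2, 2)),                     # subsampling
        layers.Conv2D(64, (3, 3), activation="relu"),    # following conv: 64 filters
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                                # to 1 dimension
        layers.Dense(128, activation="relu"),            # identify features
        layers.Dropout(0.5),                             # assumed rate; fights overfitting
        layers.Dense(10, activation="softmax"),          # classify into 10 classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A model built this way can be trained with `model.fit(x_train, y_train, epochs=...)` on the Fashion-MNIST arrays after normalizing pixel values to [0, 1].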
- Hyperparameters: a model hyperparameter can be changed manually based on experiments and helps estimate the model parameters.
**Examples: learning rate, kernel size, activation function, ...**
- Parameters: model parameters are the key features of machine learning algorithms. They cannot be changed manually by the user; they are modified during training.
**Examples: weights, kernel values, ...**
- Cross-validation was used to assess whether the model is good:
**40 epochs**

**50 epochs**

- The model overfit during the first 20 epochs, while it was not yet stable; after that it became more stable.


- We can see that the accuracy converged to 0.89, which means the parameters (weights, ...) have converged.
- Prediction results:
+ Test dataset:

+ Our own images (from the internet):

## Conclusion
- Achievements: we learned how to use TensorFlow and related libraries to build a model, used OpenCV for image processing, and successfully built a model with good performance.
- Not yet achieved: the model cannot predict every input correctly; for example, it may mislabel a random test picture from the internet.
- Improvements: more training data (for broader knowledge) and better algorithms to increase accuracy.
## References
**For report**
\[0\] [Top 5 CNN architectures](https://medium.com/datadriveninvestor/five-powerful-cnn-architectures-b939c9ddd57b)
\[1\] [Using Convolutional Neural Networks for Image Recognition](https://ip.cadence.com/uploads/901/cnn_wp-pdf "Cadence")
\[2\] [Distinguish between high and low level features](https://stackoverflow.com/questions/26590705/difference-between-low-level-and-high-level-feature-detection-extraction)
\[3\] [Nonlinear activation function](https://stackoverflow.com/questions/9782071/why-must-a-nonlinear-activation-function-be-used-in-a-backpropagation-neural-net)
\[4\] [Softmax function](https://en.wikipedia.org/wiki/Softmax_function)
[7 Types of Neural Network Activation Functions: How to Choose?](https://missinglink.ai/guides/neural-network-concepts/7-types-neural-network-activation-functions-right/ "MissingLinkAI")
[Rectifier Linear function and Vanishing Gradient Problem](https://labs.septeni-technology.jp/technote/ml-16-rectifier-linear-function-and-vanishing-gradient-problem/ "Septeni-tech")
[CS231n- Convolutional Neural Networks for Visual Recognition](https://cs231n.github.io/neural-networks-1/ "GitHub")
[Deep Learning CNN for Fashion-MNIST Clothing Classification](https://machinelearningmastery.com/how-to-develop-a-cnn-from-scratch-for-fashion-mnist-clothing-classification/ "MachineLearningMastery")
**For research**
\[5\] [Kaggle - How to choose CNN Architecture MNIST?](https://www.kaggle.com/cdeotte/how-to-choose-cnn-architecture-mnist "Kaggle")
[Youtube (TensorFlow) - ML Zero to Hero](https://www.youtube.com/watch?v=u2TjZzNuly8 "Youtube")
[TensorFlow - Training own images](https://stackoverflow.com/questions/37340129/tensorflow-training-on-my-own-image "StackOverflow")
[Codelabs - Introduction to convolutions](https://codelabs.developers.google.com/codelabs/tensorflow-lab3-convolutions/ "Codelabs")
[Datacamp - Convolutional Neural Networks with TensorFlow](https://www.datacamp.com/community/tutorials/cnn-tensorflow-python "Datacamp")
[Differences between parameters and hyperparameters](https://machinelearningmastery.com/difference-between-a-parameter-and-a-hyperparameter/)
[TensorFlow - Fashion-MNIST Example 1](https://github.com/tensorflow/tensorflow/blob/v2.3.0/tensorflow/python/keras/datasets/fashion_mnist.py "GitHub")
[TensorFlow - Fashion-MNIST Example 2](https://github.com/ashmeet13/FashionMNIST-CNN/blob/master/FashionMNIST.ipynb "GitHub")