# Food Recognition CNN
## Project Content
- Convolutional Neural Net using PyTorch.
- Multiclass classification model.
- Convolutional layers.
- ReLU activation functions.
- Max Pooling Layers.
- Densely Connected Layer at the end for classification with dropout layers.
- Using different pretrained models
- Ensembling pretrained models
- Apply a face-detector to clean data of humans, also detector of cats and dogs.
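The densely connected classifier head with dropout mentioned above can be sketched as follows. This is an illustrative sketch, not the exact head we used: the 25088 input size matches VGG16's flattened features, and 80 is the number of food classes.

```python
import torch
import torch.nn as nn

# Illustrative dense classifier head with dropout, appended after the
# convolutional feature extractor.
classifier = nn.Sequential(
    nn.Linear(25088, 4096),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 80),
)

features = torch.randn(4, 25088)  # dummy mini-batch of flattened conv features
logits = classifier(features)     # shape: (4, 80), one score per class
```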
# Algorithm Results
## VGG16
### Experiments
#### Exp. 1
**Transformations**: Random Resized Crop, Horizontal Flip
**Cost Function**: Negative Log Likelihood
**Optimiser**: Adam
**Mini Batch Size:** 128
**Dropout:** 0.5
**Learning-Rate:** 0.001
Achieved an accuracy of around 44.57%.
https://neurohive.io/en/popular-networks/vgg16/
https://colab.research.google.com/drive/1tIfWg7Ip_qwbfspaOJSouWGDqgakT4JD#scrollTo=7oWDRz5WY_fB
#### Exp. 2
**Transformations**: Horizontal Flip, Vertical Flip, RandomRotation
**Cost Function**: Cross Entropy
**Optimiser**: SGD
**Mini Batch Size:** 240
**Dropout:** 0.1
**Learning-Rate:** 0.002
**Epochs:** 30
Accuracy: 49.875%
<hr>
## VGG19
### Experiments
#### Exp. 1
Loss function: NLLLoss
Optimizer: Adam
epochs = 10
```
training_transforms = transforms.Compose([transforms.RandomRotation(30),
                                          transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(224),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
```
Accuracy: 41%
#### Exp. 2
Loss function: CrossEntropyLoss
Optimizer: SGD
epochs = 10
```
training_transforms = transforms.Compose([transforms.RandomRotation(30),
                                          transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(224),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
```
Accuracy: 49%
#### Exp. 3
Loss function: CrossEntropyLoss
Optimizer: SGD
epochs = 30
```
training_transforms = transforms.Compose([transforms.RandomRotation(degrees=90),
                                          transforms.Resize(256),
                                          transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
```
Accuracy: close to 49.1% (not tested)
#### Exp. 4
Loss function: CrossEntropyLoss
Optimizer: SGD (only on trainable parameters)
batch size = 32
epochs = 30
```
optimizer = optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.001, momentum=0.9)
training_transforms = transforms.Compose([transforms.RandomRotation(30),
                                          transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.485, 0.456, 0.406],
                                                               [0.229, 0.224, 0.225])])
```
Accuracy: 60%
<hr>
## VGG19_BN
### Experiments
#### Exp. 1
Loss function: CrossEntropyLoss
Optimizer: SGD
epochs = 30
```
training_transforms = transforms.Compose([transforms.RandomRotation(degrees=90),
                                          transforms.Resize(256),
                                          transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
```
Accuracy: close to 43.6% (not tested)
## InceptionV3
## ResNet101
### Experiments
#### Exp. 1
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.485, 0.456, 0.406],
                                                               [0.229, 0.224, 0.225])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(224),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.485, 0.456, 0.406],
                                                                 [0.229, 0.224, 0.225])])
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.parameters())), lr=0.001, momentum=0.9)
```
epochs = 10
batch size = 32
Accuracy: 60.9%
## AlexNet
Not trained yet: not enough GPU resources available.
<hr>
## Xception
### Experiments
#### Exp. 1
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(299),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.5, 0.5, 0.5],
                                                               [0.5, 0.5, 0.5])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(299),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.5, 0.5, 0.5],
                                                                 [0.5, 0.5, 0.5])])
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.parameters())), lr=0.001, momentum=0.9)
```
batch size = 32
epochs = 15 (not 100% sure)
Accuracy: 62.7%
## InceptionV4
### Experiments
#### Exp. 1
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(299),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.5, 0.5, 0.5],
                                                               [0.5, 0.5, 0.5])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(299),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.5, 0.5, 0.5],
                                                                 [0.5, 0.5, 0.5])])
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.parameters())), lr=0.001, momentum=0.9)
```
batch size = 32
epochs = 10
Accuracy: 62.1%
## InceptionResNetV2
### Experiments
#### Exp. 1
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(299),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.5, 0.5, 0.5],
                                                               [0.5, 0.5, 0.5])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(299),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.5, 0.5, 0.5],
                                                                 [0.5, 0.5, 0.5])])
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.parameters())), lr=0.001, momentum=0.9)
```
batch size = 32
epochs = 10
Accuracy: 62.5%
<hr>
## ResNeXt101_32x4d
### Experiments
#### Exp. 1
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(299),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.5, 0.5, 0.5],
                                                               [0.5, 0.5, 0.5])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(299),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.5, 0.5, 0.5],
                                                                 [0.5, 0.5, 0.5])])
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.parameters())), lr=0.001, momentum=0.9)
```
batch size = 32
epochs = 10
Accuracy: 62.5-63.6% (not tested)
#### Exp. 2
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(299),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.5, 0.5, 0.5],
                                                               [0.5, 0.5, 0.5])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(299),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.5, 0.5, 0.5],
                                                                 [0.5, 0.5, 0.5])])
criterion = nn.CrossEntropyLoss()
optimizer = adabound.AdaBound(model.parameters(), lr=1e-3, final_lr=0.1)
```
batch size = 32
epochs = 10
Accuracy: around 49%; stopped at epoch 8 after 9 hours of training (too slow)
<hr>
## ResNeXt101_64x4d
### Experiments
#### Exp. 1
## SE-ResNeXt101_32x4d
### Experiments
#### Exp. 1
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(299),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.5, 0.5, 0.5],
                                                               [0.5, 0.5, 0.5])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(299),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.5, 0.5, 0.5],
                                                                 [0.5, 0.5, 0.5])])
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.parameters())), lr=0.001, momentum=0.9)
```
batch size = 32
epochs = 10
Accuracy: 65.9%
#### Exp. 2
```
training_transforms = transforms.Compose([transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(45),
                                          transforms.RandomAffine(45),
                                          transforms.ColorJitter(),
                                          transforms.ToTensor(),
                                          transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                               std=[0.229, 0.224, 0.225])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(224),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.485, 0.456, 0.406],
                                                                 [0.229, 0.224, 0.225])])
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.parameters())), lr=0.001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=5)
```
Note: the scheduler was created but never stepped during training, and with lr = 0.001 it would probably have changed little anyway.
batch size = 64
epochs = 10
Accuracy: 65.2%
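The scheduler noted above was never actually stepped during training. Wiring it in would look roughly like this; the model is a tiny stand-in and the constant validation loss is a placeholder for the real metric.

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)  # stand-in for the real network
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=5)

for epoch in range(10):
    val_loss = 1.0  # placeholder for the real validation loss
    # The missing step: feed the monitored metric to the scheduler once
    # per epoch, after validation, so it can reduce the learning rate
    # when the metric plateaus.
    scheduler.step(val_loss)
```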
#### Exp. 3
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(299),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.5, 0.5, 0.5],
                                                               [0.5, 0.5, 0.5])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(224),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.485, 0.456, 0.406],
                                                                 [0.229, 0.224, 0.225])])
criterion = nn.CrossEntropyLoss()
model.last_linear = nn.Linear(num_ftrs, 80)
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.last_linear.parameters())), lr=0.001, momentum=0.9)
```
epochs = 10
batch size = 64
Accuracy: 40%
#### Exp. 4
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(299),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.5, 0.5, 0.5],
                                                               [0.5, 0.5, 0.5])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(224),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.485, 0.456, 0.406],
                                                                 [0.229, 0.224, 0.225])])
criterion = nn.CrossEntropyLoss()
model.last_linear = nn.Linear(num_ftrs, 80)
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.parameters())), lr=0.001, momentum=0.9)
```
epochs = 10
batch size = 64
Accuracy: 65.3%
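The main difference between Exp. 3 (40%) and Exp. 4 (65.3%) is which parameters the optimizer sees: only the new `last_linear` head versus the whole network. Sketched with a small stand-in for a pretrainedmodels-style network, the two setups differ in a single line:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Stand-in for a pretrainedmodels-style network with a `last_linear` head.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(512, 512)
        self.last_linear = nn.Linear(512, 80)

    def forward(self, x):
        return self.last_linear(self.features(x))

model = Net()

# Exp. 3 style: optimise only the replacement head; the rest stays fixed.
head_opt = optim.SGD(model.last_linear.parameters(), lr=0.001, momentum=0.9)

# Exp. 4 style: fine-tune every trainable parameter of the network.
full_opt = optim.SGD(filter(lambda p: p.requires_grad, model.parameters()),
                     lr=0.001, momentum=0.9)
```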
## ResNet152
#### Exp. 1
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.485, 0.456, 0.406],
                                                               [0.229, 0.224, 0.225])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(224),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.485, 0.456, 0.406],
                                                                 [0.229, 0.224, 0.225])])
```
epochs = 20
learning rate = 0.001
Accuracy: 63.007%
#### Exp. 2
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.485, 0.456, 0.406],
                                                               [0.229, 0.224, 0.225])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(224),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.485, 0.456, 0.406],
                                                                 [0.229, 0.224, 0.225])])
model.fc = nn.Linear(num_ftrs, 80)
#model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.fc.parameters())), lr=0.001, momentum=0.9)
```
epochs = 10
learning rate = 0.001
batch size = 64
Accuracy: probably around 40% (not tested)
#### Exp. 3
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.485, 0.456, 0.406],
                                                               [0.229, 0.224, 0.225])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(224),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.485, 0.456, 0.406],
                                                                 [0.229, 0.224, 0.225])])
model.fc = nn.Linear(num_ftrs, 80)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(list(filter(lambda p: p.requires_grad, model.parameters())), lr=0.001, momentum=0.9)
exp_lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
```
batch size = 64
epochs = 20
Accuracy: 64.1% (not tested)
## ResNet50
#### Exp. 1
- No layers frozen, whole network is trained, last layer fully connected to 80 outputs.
```
training_transforms = transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(45),
transforms.RandomAffine(45),
transforms.ColorJitter(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])])
validation_transforms = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
```
Epochs: 10
learning rate: 0.001
optimiser: SGD
batch_size: 64
Accuracy: 40%
## SE_ResNext50
#### Exp. 1
```
training_transforms = transforms.Compose([transforms.Resize(360),
                                          transforms.RandomResizedCrop(224),
                                          transforms.RandomHorizontalFlip(),
                                          transforms.RandomVerticalFlip(),
                                          transforms.RandomRotation(90),
                                          transforms.ToTensor(),
                                          transforms.Normalize([0.485, 0.456, 0.406],
                                                               [0.229, 0.224, 0.225])])
validation_transforms = transforms.Compose([transforms.Resize(256),
                                            transforms.CenterCrop(224),
                                            transforms.ToTensor(),
                                            transforms.Normalize([0.485, 0.456, 0.406],
                                                                 [0.229, 0.224, 0.225])])
```
batch size: 64
epochs: 15
Accuracy: 40.9%
#### Exp. 2
Same as Exp. 1, but the whole model is trained, with the last linear layer fully connected to 80 outputs.
epochs: 15
accuracy:
# Tips for optimization
- Different Transforms for training and validation data (data augmentation)
- Batch size from 16 to 128
- Optimizers (with or without scheduler)
- Freezing Layers
  - This will freeze all the layers:
```
for param in model.parameters():
    param.requires_grad = False
```
  - The basic idea: every model has a `children()` method that returns its layers, and each layer's parameters (weights) can be obtained with `.parameters()`. Every parameter has an attribute `requires_grad`, `True` by default, meaning it will be updated during backpropagation. To freeze a layer, set `requires_grad = False` on all of its parameters, like this:
```
model_ft = models.resnet50(pretrained=True)
ct = 0
for child in model_ft.children():
    ct += 1
    if ct < 7:
        for param in child.parameters():
            param.requires_grad = False
```
  - This freezes the first 6 of ResNet50's 10 top-level children.
  - The same, freezing the first six children using `named_children()`:
```
ct = 0
for name, child in model_conv.named_children():
    ct += 1
    if ct < 7:
        for name2, params in child.named_parameters():
            params.requires_grad = False
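After freezing, it is worth verifying how many parameters remain trainable. A quick check, using a small stand-in model rather than ResNet50:

```python
import torch.nn as nn

# Small stand-in model: two linear layers.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))

# Freeze the first child only.
for param in model[0].parameters():
    param.requires_grad = False

frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(frozen, trainable)  # 72 frozen (8*8 + 8), 18 trainable (8*2 + 2)
```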
- Epochs
# Ensemble Learning
## General Idea
Several pretrained models are fine-tuned via transfer learning; each outputs a probability distribution over the classes. We assign the class given the highest probability by any of our models.
### Approach
- Create predictions CSV file using each one of the models, this contains class probabilities for the top-5 classes, and has a prediction for each image inside of the test set (n image rows).
- Create smaller dataset where class predictions for each model are compared and then put into some logic for maximum value selection and average selection classes.
- Turn this into a final class allocation and then use that to make predictions.
- Normalise probabilities.
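The selection logic above (maximum and average ensembling over class probabilities) can be sketched like this, with random tensors standing in for real model outputs:

```python
import torch

num_classes = 80
# Softmax outputs from two hypothetical models for a batch of 4 images.
probs_a = torch.softmax(torch.randn(4, num_classes), dim=1)
probs_b = torch.softmax(torch.randn(4, num_classes), dim=1)

stacked = torch.stack([probs_a, probs_b])  # (2 models, 4 images, 80 classes)

# Maximum ensembling: for each image, take the single most confident
# prediction made by any model.
max_probs, _ = stacked.max(dim=0)          # element-wise max over models
max_pred = max_probs.argmax(dim=1)         # (4,) predicted class per image

# Average ensembling: average the distributions, then take the argmax.
avg_pred = stacked.mean(dim=0).argmax(dim=1)
```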
### Log
vgg16_pretrained:
- topk80=works, 0-1 values
vgg16_pretrained_exp1.pt
- topk = 80 works, 0-1 values
inceptionResentv2_Exp1_pretrained.pt:
- topk=80 works, large values
inc15_exp2.pt:
- topk=65 works, large values
inceptionv4_Exp1_pretrained.pt:
- topk = 46 works, large values
newvgg19_pretrained.pt:
- topk = 66 works, large values
resnet101_Exp1_pretrained.pt:
- doesn't work with this method
seresnet_Exp1_pretrained.pt:
- topk = 63 works, large values
seresnet_Exp2_pretrained.pt:
- topk = 65 works, large values
xception_Exp1_pretrained.pt:
- topk = 47, works, large values
Testing:
- Take values from probability list, and normalise the values using `normalized = (x-min(x))/(max(x)-min(x))`
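The min-max normalisation step above, applied to one model's probability list:

```python
def min_max_normalise(xs):
    """Rescale a list of scores into the 0-1 range: (x - min) / (max - min)."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

scores = [2.0, 5.0, 8.0]
print(min_max_normalise(scores))  # [0.0, 0.5, 1.0]
```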
Maximum ensembling:
- [ ] Create a pandas DataFrame with columns 1-80 representing the classes and one row per image.
- [ ] Need to normalise all the probabilities
## models which work
**These all end in a fully connected layer with 80 outputs.**
vgg16_pretrained
se_resnext50_ex2
#### Ensemble
SE_
Average ensembling:
# Really Useful Links
https://github.com/Herick-Asmani/Food-101-classification-using-ResNet-50
https://paperswithcode.com/sota/image-classification-on-imagenet
https://github.com/facebookresearch/ResNeXt
https://medium.com/@14prakash/understanding-and-implementing-architectures-of-resnet-and-resnext-for-state-of-the-art-image-cf51669e1624
https://towardsdatascience.com/an-overview-of-resnet-and-its-variants-5281e2f56035
https://towardsdatascience.com/review-senet-squeeze-and-excitation-network-winner-of-ilsvrc-2017-image-classification-a887b98b2883
http://www.image-net.org/challenges/LSVRC/
# Grading
## Innovation
- [ ] Use Ensemble Models
- [ ] Transfer Learning
## Experiments and setup
- [x] Validation split 80/20
- [x] Using different batch sizes
- [x] Using different types of gradient descent (Stochastic, Batch, Minibatch)
- [x] Tuning learning rate
- [x] Data Augmentation and transformation
- [x] Using different loss functions
## Analysis
## Pitch and poster design
- [ ] Images of different algorithm representation e.g. VGG16, ResNet
- [ ] Diagram showing what we did
- [ ] Key Results Table
- [ ] Images of different data transformations
- [ ] Plotting the learning rate/curve of our algorithms
- [ ] Project Description
- [ ] Problem Statement
- [ ] Methodology Section
- [ ] Conclusion Section
- [ ] Limitations Section
# Todo
### Niki
- [ ] Find template for the poster in Powerpoint.
- [ ] Come up with ideas for structure of the poster, (e.g. Introduction, Methodology, Results, Conclusion etc. See above section for ideas on this).
- [ ] Write draft text for different sections (we will also help with this).
### George
- [x] Train vgg19 model
- [x] Check with batch size 1 for OOM issue
- [x] Train vgg19_bn model
- [x] Try models without custom classifier
- [ ] fix problem with custom classifier (if the custom classifier is passed to the SGD optimizer as a parameter, training accuracy is always 0.001)
- [ ] optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
- [ ] optimizer = optim.Adam([var1, var2], lr=0.0001)
- [x] Train inception-v4 model
- [x] Train resnet models
- [ ] Train alexnet model
- [ ] https://towardsdatascience.com/a-bunch-of-tips-and-tricks-for-training-deep-neural-networks-3ca24c31ddc8
- [ ] check pdf for more finetuning
### Sam
- [x] Train VGG16 model
- [x] Refactor code for the networks.
- [x] Parametrise inception-v3 algorithm and record results.
- [x] Research way to do ensembling of models.
- [ ] Ensemble test with two models.
- [ ] Ensemble test with all models.
# Links
## Tutorials
https://towardsdatascience.com/how-to-train-an-image-classifier-in-pytorch-and-use-it-to-perform-basic-inference-on-single-images-99465a1e9bf5
https://github.com/LeanManager/PyTorch_Image_Classifier/blob/master/Image_Classifier_Project.ipynb
https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html
https://pytorch.org/docs/stable/torchvision/models.html
https://ruder.io/transfer-learning/
https://cs231n.github.io/transfer-learning/
Typical training-loop steps:
- Decrement the learning rate
- Zero the gradients
- Carry out the forward training pass
- Calculate the loss
- Do backward propagation and update the weights with the optimizer
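The steps above are the standard PyTorch training loop. A minimal self-contained sketch (tiny stand-in model and random data, not our real setup):

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(4, 2)  # stand-in network
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)

inputs = torch.randn(8, 4)
labels = torch.randint(0, 2, (8,))

for epoch in range(2):
    optimizer.zero_grad()              # zero the gradients
    outputs = model(inputs)            # forward training pass
    loss = criterion(outputs, labels)  # calculate the loss
    loss.backward()                    # backward propagation
    optimizer.step()                   # update the weights
    scheduler.step()                   # decrement the learning rate
```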
## relevant papers
### Object detection (Liu et al.)
- localising instances of a specific object, or generalising to detecting object categories.
- categories: object classification, generic object detection (bounding box), semantic segmentation, object instance segmentation.

- intraclass variations will be a challenge for us.
- use Average Precision as the performance measure (derived from precision and recall); other measures include Average Recall, True Positives, False Positives, and the IoU threshold.
- Two main detection approaches: two-stage (includes a preprocessing step that generates object proposals, e.g. Mask R-CNN) and one-stage (region-proposal-free frameworks such as YOLO and SSD, which usually perform worse on small objects).

- we can augment the data by producing samples of rotated examples, this way we introduce robustness for intraclass variations of this type. There are some approaches for this also such as Spatial Transformer Network (rotation invariance).
- Bounding box object proposal methods: DeepProposal, RPN
- Object segment proposal methods: DeepMask
# References
## Softmax
https://medium.com/data-science-bootcamp/understand-the-softmax-function-in-minutes-f3a59641e86d