Group9
NCKU E94096097 Mu-Cheng, Chen
NCKU F44086181 Ping-Jung, Yang
NITKC Ando Soma

# Assignment2-Image Classification Report

1. Ping-Jung, Yang

The way I changed the code: I changed the number of epochs used to train the model. In my experience, if the epoch count is not chosen properly (for example, raised to 20 or more), accuracy drops because the model overfits the data. So I trained for 15 epochs to get better accuracy without making the overfitting worse.

colab: https://colab.research.google.com/drive/1tiZNHoXae4C3s1YB7UQPJmH6ga_9k9VR?usp=sharing

2. Soma Ando

The way I changed the code: I thought I could improve the accuracy of the model by increasing the number of training sessions, so I tried that method. The training accuracy started to reach 100% after about 17 trials.

colab: https://colab.research.google.com/drive/1xDhn7J7ovYoks6sbmpQUxNiiALZMW0hG?usp=sharing

3. Mu-Cheng, Chen

- model

  I first trained the default model to check how high the accuracy would be. However, even after tuning the epoch count up to 300 and changing the proportions of the training and validation splits, the highest accuracy was only about 70%. Because of this, I decided to switch the model to VGG.

- data standardization and data augmentation

  At first, without data standardization and data augmentation, the validation accuracy did not reach 90%. As my training default I used standardization with means = (0.49, 0.48, 0.45) and stds = (0.25, 0.24, 0.26), together with augmentation using transforms.RandomRotation(degrees=10), transforms.RandomHorizontalFlip(p=0.5), and transforms.RandomPerspective(distortion_scale=0.3, p=0.5). This helped both the validation and test accuracy increase.
![](https://i.imgur.com/IdRjW9J.png)

- epoch

  I spent most of my time tuning the epoch count. I tried training for 50, 100, 150, 300, 400, 450, 475, 500, 600, and 700 epochs. From epoch 1 to 80 there is a steep decline in loss (from about 20 to about 1) and a steep rise in accuracy (from 0% to about 65%). From epoch 80 to 200 the loss still drops significantly, from about 1 to about 0.3, and accuracy rises from about 65% to about 85%. From epoch 200 to 300 the loss falls from about 0.3 to about 0.1 and accuracy rises from about 85% to about 90%. After epoch 300 the curves almost reach a plateau: accuracy increases and loss declines only haltingly.

- learning rate

  I found that the most appropriate learning rate for this model was 1E-6. The range from 1E-5 to 1E-8 gives acceptable training speed and accuracy. Below 1E-8 the steps are so small that the loss is still about 10 at epoch 100; above 1E-5 the accuracy cannot converge.

- optimizer

  I also tried SGD as the optimizer, but Adam was better because SGD fluctuated far more severely than Adam and could not converge. I suspect the reason SGD has trouble converging is that Adam adapts the learning rate automatically during training while SGD does not.

- batch_size

  I tried batch sizes smaller than 50 and larger ones such as 200, 250, and 320. With batch sizes smaller than 50, accuracy fluctuated severely and training was slower; with the larger batch sizes, accuracy not only increased rapidly and smoothly but training was also faster. Based on this, the larger batch sizes were better than the smaller ones.
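The optimizer and batch-size findings above amount to one training step with Adam at lr=1E-6 over a batch of at least 50 images. Here is a minimal runnable sketch of that step; the tiny linear model stands in for VGG purely so the example runs quickly (swap in `torchvision.models.vgg16` for the real training), and the random tensors stand in for a real batch of images.

```python
import torch
import torch.nn as nn

# Stand-in for VGG: flatten 3x32x32 images and classify into 2 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))

# Adam at the learning rate that converged best for us (1E-6);
# SGD at comparable rates fluctuated too severely to converge.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)
criterion = nn.CrossEntropyLoss()

# One batch of batch_size=50 fake images and binary labels.
images = torch.randn(50, 3, 32, 32)
labels = torch.randint(0, 2, (50,))

# One training step: forward, loss, backward, parameter update.
optimizer.zero_grad()
outputs = model(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()

# Batch accuracy: fraction of predictions matching the labels.
accuracy = (outputs.argmax(dim=1) == labels).float().mean()
```

With a larger batch size, each step averages gradients over more samples, which is consistent with the smoother accuracy curves we saw above 50.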
The best model of our team:

- model: VGG
- data splitting: train size 80%, valid size 20%
- batch size: 50
- optimizer: Adam
- learning rate: 1E-6
- epochs: 700
- training time: about 4 hr

![](https://i.imgur.com/QTQQUod.png)
![](https://i.imgur.com/KFGcrnb.png)

colab: https://colab.research.google.com/drive/1F1Kwttb57nUSvZ7eNxUbjk3Pa8R-ueAZ?usp=sharing#scrollTo=gEfROle1leF7

- compare to others

  I noticed that other teams resized the images to 224*224 before training, and they also tried a wider variety of models such as ResNet50, AlexNet, and ResNet152. I think we should have studied some cats-and-dogs examples before implementing our model, and learned more about the models from research papers, instead of just spending a lot of time on trial-and-error parameter tuning.