# GA2: Exploration Vince
## Initial Run
### Network

batch: 32
epochs: 10
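With these settings each epoch is one pass over the training set in mini-batches of 32. A quick sanity check of the step bookkeeping, assuming a hypothetical training-set size of 50,000 samples (the actual dataset size isn't stated in these notes):

```python
import math

# Hypothetical numbers -- the notes only give batch size and epoch count.
num_samples = 50_000   # assumed training-set size, not given in the notes
batch_size = 32        # from the config above
epochs = 10            # from the config above

steps_per_epoch = math.ceil(num_samples / batch_size)
total_steps = steps_per_epoch * epochs

print(steps_per_epoch)  # 1563
print(total_steps)      # 15630
```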
### Performance

## From GitHub

### Network

batch: 32
epochs: 10
### Performance

- Training set accuracy: 0.6513
- Training set loss: 1.0855
- Validation set accuracy: 0.4907
- Validation set loss: 1.7243
**More epochs: 20**

- Training set accuracy: 0.8518
- Training set loss: 0.4523
- Validation set accuracy: 0.4918
- Validation set loss: 2.4849
**We are clearly overfitting now.**
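One standard way to curb this (a sketch of the idea, not necessarily what was used in these runs) is early stopping: halt training once the validation loss has stopped improving for a few epochs.

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch index at which training would stop: the first epoch
    after which the validation loss has failed to improve for `patience`
    consecutive epochs. Falls back to the last epoch otherwise."""
    best = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1

# Toy curve: improves for five epochs, then climbs (overfitting).
losses = [1.9, 1.8, 1.75, 1.72, 1.70, 1.74, 1.80, 1.90, 2.1, 2.4]
print(early_stop_epoch(losses))  # 7
```

Frameworks ship this as a callback (e.g. Keras has an `EarlyStopping` callback with a `patience` parameter), so there is no need to implement it by hand in practice.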
When we try to increase the batch size to 64, we get the following:

More squiggly curves
If we increase the batch size even more (128)

Not much difference
**Let's add some dropout (0.4)**
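For reference, dropout with rate 0.4 zeroes out roughly 40% of the activations at training time and rescales the survivors so the expected activation is unchanged. The framework handles this internally; this is just a sketch of the inverted-dropout idea:

```python
import numpy as np

def dropout(x, rate=0.4, rng=None):
    """Inverted dropout: zero each element with probability `rate`,
    scale survivors by 1/(1-rate) so the expected value is unchanged."""
    rng = rng or np.random.default_rng(0)   # fixed seed for reproducibility
    keep = rng.random(x.shape) >= rate
    return np.where(keep, x / (1.0 - rate), 0.0)

x = np.ones((4, 8))
y = dropout(x, rate=0.4)
# Surviving entries are scaled to 1/0.6 ~ 1.667; the rest are zero.
```

At inference time dropout is disabled, which is why the rescaling during training matters: the layer's expected output is the same in both modes.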

Looks a bit cleaner. Best values found at epoch 12
## Other network found
### Network

batch: 32
epochs: 10
**This network is massive**
### Performance

**Yikes**
Let's remove the dropout.

I guess it's better, but no clue what happened at epoch 16 :o. Let's keep it at 10 epochs.

I don't really like this model: it's not that powerful, yet it has 7 million parameters. Let's skip it for now.
## One more found

### Network

batch: 32
epochs: 10
### Performance

- Training set accuracy: 0.7575
- Training set loss: 0.7547
- Validation set accuracy: 0.4498
- Validation set loss: 2.1709
Not too shabby. The validation loss increases from epoch 7 onwards, but let's still try to give it more epochs: 20 instead of 10.

Indeed, we get a major overfit. Let's try adding some dropout to hopefully reduce this overfit a bit.

Now it's just terrible. Maybe the other model was also this bad because of the dropout; let's remove it.
Let's try adding batch normalization to the layers.
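Batch normalization standardizes each feature over the mini-batch and then applies a learned scale and shift. A minimal sketch of the training-time normalization step (this says nothing about where the layers actually sit in the network here):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature (column) to zero mean / unit variance over
    the batch, then apply the learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
y = batch_norm(x)
# Each column of y now has mean ~0 and variance ~1.
```

At inference time the layer instead uses running estimates of the mean and variance collected during training; the framework tracks those automatically.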

Alright alright, not too bad. Maybe make the dense layer a bit smaller.

Does not really make a difference in performance but seriously decreases the number of parameters.
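That drop makes sense: a dense layer has `inputs × units + units` parameters, so its size scales linearly with the layer width. A quick check with hypothetical shapes (the actual layer sizes aren't recorded in these notes):

```python
def dense_params(n_in, n_units):
    # weight matrix plus one bias per unit
    return n_in * n_units + n_units

# Hypothetical: a 4x4x128 conv output flattened to 2048 features.
flat = 4 * 4 * 128
print(dense_params(flat, 512))  # 1049088
print(dense_params(flat, 256))  # 524544 -- roughly half
```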
Now we try to play a bit with the convolutions.

Very similar, maybe worse, and the number of parameters has doubled. So let's not do this.
Next I tried changing the activation function to elu instead of relu. It might be nice if the model can produce negative activations?
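Unlike relu, which clips everything below zero, elu passes negative inputs through a smooth curve that saturates at -α. A quick sketch of both:

```python
import math

def relu(x):
    return max(0.0, x)

def elu(x, alpha=1.0):
    # Identical to relu for x >= 0, smooth and bounded below by -alpha for x < 0.
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)

print(relu(-2.0))            # 0.0
print(round(elu(-2.0), 4))   # -0.8647
print(elu(3.0))              # 3.0
```

Because elu keeps a nonzero gradient for negative inputs, units are less prone to "dying" than with relu, which is one plausible reason for the improvement seen below.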

Generally better scores, with the validation accuracy going over 60% in more epochs.

We add some dropout (0.3)

## Our network


- Training set accuracy: 0.9999
- Training set loss: 0.0003
- Validation set accuracy: 0.6317
- Validation set loss: 1.7453