# GA2: Exploration

# Jason

## Original
![](https://i.imgur.com/Dx2rVGJ.png)

## Batch size = 256
![](https://i.imgur.com/4PuSlcJ.png)

## Epochs 50
![](https://i.imgur.com/15BQp8c.png)

## Add 128 conv layer
![](https://i.imgur.com/qIacdjm.png)

Deeper networks require batchnorm.

![](https://i.imgur.com/lg5J1CT.png)

## Add batchnorm to every conv layer
![](https://i.imgur.com/yGVpTC0.png)

## Add 256 conv layer and batchnorm
![](https://i.imgur.com/y3GQdZg.png)

This did not make it better, so let's revert this change.

## Decrease learning rate to 0.0001
![](https://i.imgur.com/VDSU7a3.png)

Let's try adding one more layer after all.

![](https://i.imgur.com/lG2P5R8.png)

## Maybe add a 16-filter layer at the front and remove the 256 layer again
![](https://i.imgur.com/9qUW6bn.png)

-----------------------------------------------------------

# Vince

67% accuracy as the base model now. The model may be too complex, because regularization is not working.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Activation, BatchNormalization, Conv2D,
                                     Dense, Flatten, MaxPooling2D)

num_classes = 10  # assumption: CIFAR-10, given the (32, 32, 3) input shape

model = Sequential()

# Block 1: 32 -> 64 filters
model.add(Conv2D(32, (3, 3), padding='same', input_shape=(32, 32, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2)))

# Block 2: 2x128 filters
model.add(Conv2D(128, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(128, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2)))

# Block 3: 2x256 filters
model.add(Conv2D(256, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(256, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2)))

# Classifier head
model.add(Flatten())
model.add(Dense(128))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(64))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
```

![](https://i.imgur.com/a3VSFbJ.png)

This gives a relatively clean curve (apart from the steep drop in the last epochs, but we were told that is normal).

## Fewer epochs (25)
![](https://i.imgur.com/6OlKz6R.png)

## Remove all dense layers
![](https://i.imgur.com/0QkHs0q.png)

## Add 2x512 conv
![](https://i.imgur.com/0LNnu0t.png)

## Add 2x1024 conv
![](https://i.imgur.com/N09pfBg.png)

## Start wider and less deep
![](https://i.imgur.com/hIEsymD.png)

## Back to no dense layers, 2x256, dropout 0.1 on the last 3 conv layers
Not good, maybe.

## Vince model
![](https://i.imgur.com/GzGdUmS.png)

## Add maxnorm 4 in dropout layers
A sketch of this change follows below.
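No run was recorded for this last step, so here is a minimal sketch of what it could look like in Keras, assuming the "2x256, dropout 0.1" variant above: a `max_norm(4)` kernel constraint is added to the conv layers that are paired with dropout. The `conv_block` helper and the filter counts and dropout rate are illustrative assumptions, not the recorded model.

```python
from tensorflow.keras.constraints import max_norm
from tensorflow.keras.layers import (Activation, BatchNormalization, Conv2D,
                                     Dropout)
from tensorflow.keras.models import Sequential

def conv_block(model, filters):
    # Hypothetical helper: a conv layer whose kernel is constrained to a
    # maximum L2 norm of 4, followed by the dropout it is paired with.
    model.add(Conv2D(filters, (3, 3), padding='same',
                     kernel_constraint=max_norm(4)))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(Dropout(0.1))  # rate 0.1 taken from the earlier variant

model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=(32, 32, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
conv_block(model, 256)  # the "2x256" conv layers, now with max_norm(4)
conv_block(model, 256)
```

The rationale for pairing `max_norm` with dropout: dropout can push individual weights toward large values to compensate for dropped units, and capping the kernel norm keeps that growth in check.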