![](https://i.imgur.com/FICBHvt.png) ![](https://i.imgur.com/evE3GlG.png)

[**PRACE AUTUMN SCHOOL 2021**](https://events.prace-ri.eu/event/1188/)
[**INTRODUCTION TO DEEP LEARNING**](https://hackmd.io/@pdl/Skn7I48MY)

# NOTEBOOKS EXERCISES

## Exercise 2 (02-tf2-mnist-mlp)

### Using MNIST data

| Submitter | Model description | Test accuracy |
| --------- | ----------------- | ------------- |
| Markus | original mlp_model | 95.54% |
| Markus | mlp_model, 20 epochs | 96.01% |
| Sreeram | mlp_model, 20 epochs, 3 dense layers with 100, 50, and 20 units, dropout 0.2 | 97.41% |
| Neha | mlp_model, 10 epochs, dense layer 1 = 100, layer 2 = 50, dropout 0.2 | 98.02% |
| Markus | example answer two-layer model | 97.30% |
| A Submitter | 0.21 dropout rate | 97.17% |
| Rajendra | 0.1 dropout rate | 97.17% |
| Charles | layer 1 = 160, layer 2 = 120, dropout 0.5 | 98.25% |
| Murali | 3 dense layers 150, 100, 50, dropout (0.2, 0.2, 0.1) | 98.30% |

## Exercise 3 (03-tf2-mnist-cnn)

| Submitter | Model description | Test accuracy |
| --------- | ----------------- | ------------- |
| Markus | original cnn_model | 98.36% |
| Markus | original cnn_model, 10 epochs | 98.74% |
| Rajendra | first layer 3,3; second 2,3; 0.01 dropout rate; 5 epochs | 99.19% |
| Pekka | 2 layers, Adam, 30 epochs, batch size 128 | 99.33% |
| Rajendra | first layer 3,3; second 2,3; 0.01 dropout rate; 11 epochs | 99.64% |
| ~ilja | 1. Conv2D 32 (7x7); 2. Conv2D 62 (3x3); MaxPool 2x2; Dropout 0.2; Flatten; Dense 256; Dropout 0.5; 20 epochs | 99.39% |
| Neha | 2 dense layers = 128, 100; 25 epochs | 99.08% |
| Sreeram | 2 dense layers = 128, 64; 25 epochs | 98.72% |
| Murali | 2 conv2d layers, 10 epochs, batch size 32 | 99.30% |
| Markus | example answer better_cnn_model; batch_size 128 | 98.95% |
| Febrian | Task 1 parameters, batch_size 32 | 99.23% |
| Dr. Leipa | batch size 24 | 99.14% |

For comparison, the current state of the art on MNIST is 99.87%: https://paperswithcode.com/sota/image-classification-on-mnist

## Exercise 4 (04-tf2-imdb-rnn)

| Submitter | Model description | Test accuracy |
| --------- | ------------------------------------ | ------------- |
| Mats | original RNN (single LSTM), 5 epochs | 82.78% |
| Rajendra | first layer 32, second 16, 0.01 dropout rate, 5 epochs | 83.06% |
| Mats | single CNN layer (optional/tf2-imdb-cnn) | 88.28% |
| Neha | single RNN layer | 80.05% |
| Sreeram | 2 RNN layers | 78.40% |
| Markus | example answer rnn_model_with_two_lstm_layers | 83.62% |
| Mats | original RNN (two bi-directional LSTMs), 5 epochs | 83.32% |
| Febrian | original RNN (one bi-directional LSTM), 15 epochs, batch_size 32 | 81.45% |
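
For readers who want to reproduce results like the Exercise 2 rows above, here is a minimal sketch of an MNIST MLP in tf.keras. The layer sizes (100, 50, 20 units), dropout rate 0.2, and 20 epochs mirror one of the submitted configurations; everything else (optimizer, batch size, validation split) is an assumption, not the course notebook's exact code.

```python
# Minimal MNIST MLP sketch (assumed hyperparameters, not the notebook's exact mlp_model)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load MNIST and flatten the 28x28 images into 784-dimensional vectors in [0, 1]
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Three Dense layers (100, 50, 20 units) with 0.2 dropout, as in one submitted entry
mlp_model = keras.Sequential([
    layers.Dense(100, activation="relu", input_shape=(784,)),
    layers.Dropout(0.2),
    layers.Dense(50, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(20, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(10, activation="softmax"),
])
mlp_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
mlp_model.fit(x_train, y_train, epochs=20, batch_size=32, validation_split=0.1)
print(mlp_model.evaluate(x_test, y_test))
```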
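Similarly, a two-Conv2D network along the lines of the Exercise 3 entries (e.g. "2 conv2d layers, 10 epochs, batch size 32") could look like the sketch below. Filter counts, kernel sizes, and dropout rates are assumptions chosen for illustration.

```python
# Minimal MNIST CNN sketch (assumed architecture, not the notebook's exact cnn_model)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load MNIST and keep the 28x28 spatial layout, adding a channel dimension
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0

# Two convolutional layers, max pooling, then a small dense classifier
cnn_model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])
cnn_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
cnn_model.fit(x_train, y_train, epochs=10, batch_size=32, validation_split=0.1)
print(cnn_model.evaluate(x_test, y_test))
```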
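Finally, a single-LSTM IMDB sentiment model of the kind reported in Exercise 4 could be sketched as follows. The vocabulary size, sequence length, embedding width, and LSTM units are assumptions; the course notebook may preprocess the data differently.

```python
# Minimal IMDB single-LSTM sketch (assumed preprocessing and hyperparameters)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, maxlen = 10000, 200  # assumed vocabulary size and review length cap

# Load IMDB reviews as integer word indices and pad/truncate to a fixed length
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=maxlen)

# Embedding layer followed by a single LSTM and a sigmoid output for binary sentiment
rnn_model = keras.Sequential([
    layers.Embedding(vocab_size, 64),
    layers.LSTM(32, dropout=0.2),
    layers.Dense(1, activation="sigmoid"),
])
rnn_model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
rnn_model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.2)
print(rnn_model.evaluate(x_test, y_test))
```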