# GA3: Iterative improvement, CONV model

## Baseline: window W=4, epochs E=10, batch size B=64, LR=0.01

![](https://i.imgur.com/8cxTz5c.png)

loss: 30.4623 - mae: 30.4623 - mse: 2720.6941 - val_loss: 31.4966 - val_mae: 31.4966 - val_mse: 2742.7012

## Noisy loss curve -> increase epochs and decrease learning rate

![](https://i.imgur.com/ZhUYfHs.png)

loss: 30.0877 - mae: 30.0877 - mse: 2670.2458 - val_loss: 31.4501 - val_mae: 31.4501 - val_mse: 2721.9082

## Add the (mandatory) dense layer of 8 units

![](https://i.imgur.com/G1mw4uv.png)

loss: 30.0827 - mae: 30.0827 - mse: 2654.0505 - val_loss: 31.5951 - val_mae: 31.5951 - val_mse: 2827.8494

## Increase epochs to 40, window size to 12

![](https://i.imgur.com/bWsCpJb.png)

loss: 29.9569 - mae: 29.9569 - mse: 2611.9231 - val_loss: 31.3053 - val_mae: 31.3053 - val_mse: 2706.4980

## Window size to 24\*7 (one week)

![](https://i.imgur.com/3L7Wwkp.png)

loss: 29.5888 - mae: 29.5888 - mse: 2535.5056 - val_loss: 31.3419 - val_mae: 31.3419 - val_mse: 2612.5759

## Window size to 24\*7\*2 (two weeks)

![](https://i.imgur.com/1n99qGB.png)

loss: 29.1403 - mae: 29.1403 - mse: 2441.6990 - val_loss: 31.5576 - val_mae: 31.5576 - val_mse: 2639.4238

## Choose one week (smaller train/validation gap; two weeks gives no further improvement)

## Kernel size increased to 5

![](https://i.imgur.com/EwhTq5a.png)

loss: 28.9623 - mae: 28.9623 - mse: 2430.8958 - val_loss: 31.7951 - val_mae: 31.7951 - val_mse: 2678.9651

## Add a conv layer before the maxpool

![](https://i.imgur.com/FxO1VQ1.png)

loss: 28.9756 - mae: 28.9756 - mse: 2419.3940 - val_loss: 31.4359 - val_mae: 31.4359 - val_mse: 2660.1919

## Add another conv layer

![](https://i.imgur.com/EN3QJ2n.png)

loss: 28.2762 - mae: 28.2762 - mse: 2326.3718 - val_loss: 32.0295 - val_mae: 32.0295 - val_mse: 2641.2019

## Increase to 64 filters

![](https://i.imgur.com/iXCk3v7.png)

loss: 28.6907 - mae: 28.6907 - mse: 2399.2932 - val_loss: 31.3968 - val_mae: 31.3968 - val_mse: 2688.0378

----------------------------------------------------------

**TODO:** Swish activation in the conv model

**TODO:** Increase kernel size once more in the conv model
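For reference, here is a minimal Keras sketch of roughly the final configuration reached above (one-week window, stacked conv layers with 64 filters and kernel size 5, a maxpool, the mandatory dense layer of 8 units, MAE loss, 40 epochs, batch size 64). The exact layer order, the `windowed_dataset` helper, the causal padding, the `Flatten` layer, and the lowered learning rate of 0.001 are assumptions, since the original notebook code is not shown here.

```python
import tensorflow as tf

WINDOW = 24 * 7   # one week of hourly steps (chosen over two weeks above)
BATCH = 64
EPOCHS = 40
LR = 0.001        # assumption: the "decrease learning rate" step; exact value not given

def windowed_dataset(series, window=WINDOW, batch=BATCH):
    """Turn a 1-D series into (window, 1) input -> next-value target pairs.
    This pipeline is an assumed reconstruction, not the original code."""
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window + 1))
    ds = ds.map(lambda w: (tf.expand_dims(w[:-1], -1), w[-1:]))
    return ds.shuffle(1000).batch(batch).prefetch(tf.data.AUTOTUNE)

# Assumed final architecture: three Conv1D layers (64 filters, kernel 5)
# before the maxpool, then the mandatory Dense(8) and a single-value output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(64, 5, padding="causal", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="causal", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="causal", activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# MAE as the loss matches the logs above, where loss and mae are identical.
model.compile(
    loss="mae",
    optimizer=tf.keras.optimizers.Adam(learning_rate=LR),
    metrics=["mae", "mse"],
)
# history = model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)
```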
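For the first TODO: Keras registers `"swish"` as a built-in activation identifier (TensorFlow >= 2.2), so trying Swish should only require changing the activation argument on the conv layers, e.g.:

```python
import tensorflow as tf

# Swap ReLU for Swish on each Conv1D layer; "swish" is a built-in Keras
# activation string, so no custom activation function is needed.
conv = tf.keras.layers.Conv1D(64, 5, padding="causal", activation="swish")
```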