# GA3: Iterative improvement CONV

## Baseline: W=4, E=10, B=64, LR=0.01

loss: 30.4623 - mae: 30.4623 - mse: 2720.6941 - val_loss: 31.4966 - val_mae: 31.4966 - val_mse: 2742.7012

## Scuffed loss curve -> increase epochs and decrease learning rate

loss: 30.0877 - mae: 30.0877 - mse: 2670.2458 - val_loss: 31.4501 - val_mae: 31.4501 - val_mse: 2721.9082

## Add dense layer (mandatory) of 8 units

loss: 30.0827 - mae: 30.0827 - mse: 2654.0505 - val_loss: 31.5951 - val_mae: 31.5951 - val_mse: 2827.8494

## Increase epochs to 40, window size to 12

loss: 29.9569 - mae: 29.9569 - mse: 2611.9231 - val_loss: 31.3053 - val_mae: 31.3053 - val_mse: 2706.4980

## Window size to 24\*7 (one week)

loss: 29.5888 - mae: 29.5888 - mse: 2535.5056 - val_loss: 31.3419 - val_mae: 31.3419 - val_mse: 2612.5759

## Window size to 24\*7\*2 (two weeks)

loss: 29.1403 - mae: 29.1403 - mse: 2441.6990 - val_loss: 31.5576 - val_mae: 31.5576 - val_mse: 2639.4238

## Choose one week (smaller train/val gap; two weeks gives no improvement)

## Kernel size increased to 5

loss: 28.9623 - mae: 28.9623 - mse: 2430.8958 - val_loss: 31.7951 - val_mae: 31.7951 - val_mse: 2678.9651

## Add conv layer before maxpool

loss: 28.9756 - mae: 28.9756 - mse: 2419.3940 - val_loss: 31.4359 - val_mae: 31.4359 - val_mse: 2660.1919

## Add another conv layer

loss: 28.2762 - mae: 28.2762 - mse: 2326.3718 - val_loss: 32.0295 - val_mae: 32.0295 - val_mse: 2641.2019

## Increase to 64 filters

loss: 28.6907 - mae: 28.6907 - mse: 2399.2932 - val_loss: 31.3968 - val_mae: 31.3968 - val_mse: 2688.0378

----------------------------------------------------------

**TODO:** Swish in conv model

**TODO:** Increase kernel size once more in conv model
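For reference, here is a minimal Keras sketch of roughly what the configuration above ends up as: a one-week window (24\*7 = 168 steps), kernel size 5, stacked Conv1D layers before max pooling, the mandatory Dense layer of 8, MAE as the loss (the logs show loss == mae) with MAE/MSE metrics, batch size 64, 40 epochs. The windowing helper, exact filter counts, optimizer, and the post-decrease learning rate are assumptions, not the actual notebook code.

```python
import tensorflow as tf

# Assumed hyperparameters based on the log above; the learning rate after
# the "decrease learning rate" step is a guess.
WINDOW = 24 * 7      # one-week window (chosen over two weeks)
BATCH = 64
EPOCHS = 40
LR = 0.005           # assumption: somewhere below the baseline 0.01

def windowed_dataset(series, window_size=WINDOW, batch_size=BATCH, shuffle_buffer=1000):
    """Sliding-window pipeline (assumption: univariate series, one-step-ahead target)."""
    series = tf.expand_dims(series, axis=-1)
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[-1]))
    return ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    # Stacked Conv1D layers before max pooling ("add conv layer before maxpool",
    # then another one); kernel size 5, last block bumped to 64 filters.
    tf.keras.layers.Conv1D(32, kernel_size=5, padding="causal", activation="relu"),
    tf.keras.layers.Conv1D(32, kernel_size=5, padding="causal", activation="relu"),
    tf.keras.layers.Conv1D(64, kernel_size=5, padding="causal", activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(8, activation="relu"),   # the mandatory Dense layer of 8
    tf.keras.layers.Dense(1),
])

# Loss is MAE, MSE is tracked only as a metric, matching the log lines above.
model.compile(
    loss="mae",
    optimizer=tf.keras.optimizers.Adam(learning_rate=LR),
    metrics=["mae", "mse"],
)

# train_series / val_series are placeholders for the actual data split:
# history = model.fit(windowed_dataset(train_series), epochs=EPOCHS,
#                     validation_data=windowed_dataset(val_series, shuffle_buffer=1))
```

For the first TODO, swapping `activation="relu"` for `activation="swish"` in the Conv1D layers would be enough to try Swish in this sketch.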