# Good Catcher Reference

###### tags: `working`

## Training VAE

- [Keras - Variational AutoEncoder](https://keras.io/examples/generative/vae/)
  - Use the KL terms from this example.
- [Less pain, more gain: A simple method for VAE training with less of that KL-vanishing agony](https://www.microsoft.com/en-us/research/blog/less-pain-more-gain-a-simple-method-for-vae-training-with-less-of-that-kl-vanishing-agony/)
- [Dealing with KL vanishing](https://zhuanlan.zhihu.com/p/64071467)
  1. KL cost annealing
  2. Free bits

     ```python
     import tensorflow as tf

     # Free bits: clamp the KL term from below so dimensions whose KL is
     # already under `epsilon` nats stop contributing gradient, which keeps
     # the KL from collapsing to zero.
     epsilon = 5.
     kl_loss = tf.maximum(kl_loss, epsilon)
     ```

  3. To make the reconstruction depend sufficiently on the latent variable, weaken the decoder.

## VAE generates blurry images

- [[D] Why are images created by GAN sharper than images by VAE?](https://www.reddit.com/r/MachineLearning/comments/9t712f/dwhy_are_images_created_by_gan_sharper_than/e8u8xrz?utm_source=share&utm_medium=web2x&context=3)
  - Most people produce VAE "reconstructions" by feeding the decoder the mean the encoder outputs as z. If you instead decode a z actually sampled from the posterior, the reconstruction is very noisy.
- Possible solutions
  - PixelVAE
  - Variational Lossy Autoencoder
  - IAF
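The KL cost annealing trick mentioned above can be sketched as a scalar weight on the KL term that warms up from 0 to 1. This is a minimal sketch, assuming a linear schedule over a hypothetical `anneal_steps` parameter; the cited posts also discuss cyclical schedules.

```python
def kl_weight(step, anneal_steps=10_000):
    """Linearly anneal the KL weight (beta) from 0 to 1.

    Early in training beta is near 0, so the model first learns to
    reconstruct; the KL regularizer is then phased in gradually,
    which counteracts KL vanishing.
    """
    return min(1.0, step / anneal_steps)

# In the training loop (illustrative):
# total_loss = reconstruction_loss + kl_weight(step) * kl_loss
```

A cyclical variant (as in the Microsoft post) would simply reset `step` modulo a cycle length before computing the same ratio.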
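The mean-versus-sampled-z point above can be made concrete with NumPy: decoding `z = mu` discards the encoder's uncertainty, while a reparameterized sample `z = mu + sigma * eps` adds the noise that makes sampled reconstructions look worse. The `mu`/`log_var` values here are hypothetical, just to show the two choices of z.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.zeros(8)            # hypothetical posterior mean
log_var = np.full(8, -1.0)  # hypothetical posterior log-variance

# Choice 1: deterministic "reconstruction" input (what most demos use).
z_mean = mu

# Choice 2: a real posterior sample via the reparameterization trick.
eps = rng.standard_normal(8)
z_sample = mu + np.exp(0.5 * log_var) * eps

# z_sample deviates from z_mean by roughly sigma per dimension,
# and the decoder must absorb that noise.
deviation = np.abs(z_sample - z_mean).max()
```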