{%hackmd theme-dark %}
# GAN Project References :kiwifruit:
###### tags: `gan` `project`
[GAN Project Codes :herb:](/8AZt1e-FQ1SQ-mujbghStg)
## Research structure
```mermaid
graph LR
gan(GAN)
pgan(Progressive GAN)
sgan(StyleGAN)
sgan2(StyleGAN2)
sgan2ada(StyleGAN2-ADA)
sgan3(StyleGAN3)
gif(GIF)
classDef approach fill:cyan,stroke:black
class gan,pgan,sgan,sgan2,sgan2ada,sgan3,gif approach
gan-->pgan
subgraph Progressive GANs
pgan-->sgan-->sgan2-->sgan2ada-->sgan3
end
sgan3-->gif
sgan-->gif
gan-->bgan(BigGAN)
gan-->cgan(CycleGAN)
gan-->gpgan(GP-GAN)
linkStyle 0,1,2,3,4 stroke:green
linkStyle 5 stroke:red,stroke-dasharray:3
```
:::info
The dotted red link is the connection we aim to implement
:::
## Main reference papers
### Original GAN (2014)
>the original paper on GANs
- Paper: [Generative Adversarial Nets (arXiv)](https://arxiv.org/abs/1406.2661)
### ProGAN (2017)
>in Progressive GAN, "Progressive" means that during training the generator is gradually upsampled from low resolution to high resolution (see the fade-in sketch after the links below)
- Paper: [Progressive Growing of GANs for Improved Quality, Stability, and Variation (arXiv)](https://arxiv.org/abs/1710.10196)
- [Video](https://youtu.be/G06dEcZ-QTg)
- [TensorFlow implementation](https://github.com/tkarras/progressive_growing_of_gans)
- [CelebA-HQ dataset](https://github.com/tkarras/progressive_growing_of_gans#preparing-datasets-for-training)
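To make the growing idea concrete, here is a minimal PyTorch-style sketch of how a new resolution block is faded in; the function and module names (`new_block`, `to_rgb_old`, `to_rgb_new`) are illustrative, not taken from the official implementation:
```python
import torch.nn.functional as F

def fade_in_output(features, new_block, to_rgb_old, to_rgb_new, alpha):
    """ProGAN-style fade-in during growth (illustrative sketch).

    `features` come from the last stable generator block at the old
    resolution. While `alpha` ramps from 0 to 1 over training, the output
    blends the upsampled old RGB image with the new higher-resolution
    block's RGB output, so resolution grows smoothly
    (4x4 -> 8x8 -> ... -> 1024x1024).
    """
    old_rgb = F.interpolate(to_rgb_old(features), scale_factor=2, mode='nearest')
    new_rgb = to_rgb_new(new_block(F.interpolate(features, scale_factor=2, mode='nearest')))
    return (1 - alpha) * old_rgb + alpha * new_rgb
```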
### StyleGAN (2018)
>a generator architecture proposed by NVIDIA for better controllability over the generated images (see the mapping-network sketch after the links below)
- Paper: [A Style-Based Generator Architecture for Generative Adversarial Networks (arXiv)](https://arxiv.org/abs/1812.04948)
- [Video](https://youtu.be/kSLJriaOumA)
- [TensorFlow implementation](https://github.com/NVlabs/stylegan)
- [FFHQ dataset](https://github.com/NVlabs/ffhq-dataset)
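The key architectural idea is the mapping network: the latent code z is first mapped to an intermediate latent w, which then controls the "style" at each resolution (via AdaIN in the paper). A minimal sketch, assuming 512-dim latents as in the paper (layer details simplified):
```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """StyleGAN mapping network z -> w (simplified sketch)."""
    def __init__(self, z_dim=512, w_dim=512, num_layers=8):
        super().__init__()
        layers, dim = [], z_dim
        for _ in range(num_layers):
            layers += [nn.Linear(dim, w_dim), nn.LeakyReLU(0.2)]
            dim = w_dim
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        # Pixel-norm the latent as in the paper, then map to w.
        z = z / (z.pow(2).mean(dim=1, keepdim=True) + 1e-8).sqrt()
        return self.net(z)
```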
### StyleGAN2 (2019)
>an improved version of StyleGAN that removes its characteristic blob artifacts by replacing AdaIN with weight demodulation (sketched after the links below)
- Paper: [Analyzing and Improving the Image Quality of StyleGAN (arXiv)](https://arxiv.org/abs/1912.04958)
- [Video](https://youtu.be/c-NJtV9Jvp0)
- [TensorFlow implementation](https://github.com/NVlabs/stylegan2)
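The central change is the weight (de)modulation operation. A minimal sketch of it, assuming square kernels and per-sample style scales (shapes and names are illustrative):
```python
import torch.nn.functional as F

def modulated_conv2d(x, weight, style, demodulate=True, eps=1e-8):
    """StyleGAN2 weight (de)modulation (simplified sketch).

    x:      [batch, in_ch, H, W] features
    weight: [out_ch, in_ch, k, k] conv weights
    style:  [batch, in_ch] style scales from the mapping network
    """
    b, in_ch, h, w_sz = x.shape
    out_ch, _, k, _ = weight.shape

    # Modulate: scale the weights per sample by the style.
    w = weight.unsqueeze(0) * style.view(b, 1, in_ch, 1, 1)

    if demodulate:
        # Demodulate: rescale so each output map has ~unit variance,
        # replacing AdaIN's explicit normalization.
        sigma = (w.pow(2).sum(dim=[2, 3, 4]) + eps).rsqrt()  # [b, out_ch]
        w = w * sigma.view(b, out_ch, 1, 1, 1)

    # Grouped-conv trick: fold the batch into groups.
    x = x.reshape(1, b * in_ch, h, w_sz)
    w = w.reshape(b * out_ch, in_ch, k, k)
    out = F.conv2d(x, w, padding=k // 2, groups=b)
    return out.reshape(b, out_ch, h, w_sz)
```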
### StyleGAN2-ADA (2020)
>ADA stands for "adaptive discriminator augmentation", which applies [invertible data augmentation](https://en.wikipedia.org/wiki/Generative_adversarial_network#Invertible_data_augmentation) on top of StyleGAN2, making it more suitable for limited training data (see the sketch after the links below)
- Paper: [Training Generative Adversarial Networks with Limited Data (arXiv)](https://arxiv.org/abs/2006.06676)
- [PyTorch implementation](https://github.com/NVlabs/stylegan2-ada-pytorch)
- [MetFaces dataset](https://github.com/NVlabs/metfaces-dataset)
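The "adaptive" part is a feedback loop: the augmentation probability p is nudged up or down depending on how much the discriminator overfits, measured by the paper's heuristic r_t = E[sign(D(real))] with target 0.6. A minimal sketch (the step size here is illustrative; the paper ties it to training progress):
```python
def update_ada_p(p, real_logits, target=0.6, step=5e-4):
    """ADA probability update (simplified sketch).

    r_t = E[sign(D(real))] rises toward 1 as the discriminator
    overfits; p is nudged to keep r_t near the target.
    """
    r_t = real_logits.sign().mean().item()   # overfitting heuristic
    p += step if r_t > target else -step
    return min(max(p, 0.0), 1.0)             # keep p in [0, 1]
```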
### StyleGAN3 (2021)
>the state-of-the-art StyleGAN as of 2021, which makes the generator alias-free and equivariant to translation and rotation
- [Project page](https://nvlabs.github.io/stylegan3/)
- Paper: [Alias-Free Generative Adversarial Networks (NVlabs)](https://nvlabs-fi-cdn.nvidia.com/stylegan3/stylegan3-paper.pdf)
- [Alias-Free Generative Adversarial Networks (arXiv)](https://arxiv.org/abs/2106.12423)
- [PyTorch implementation](https://github.com/NVlabs/stylegan3)
:::success
This explains them really clearly: [StyleGAN vs StyleGAN2 vs StyleGAN2-ADA vs StyleGAN3](https://medium.com/@steinsfu/stylegan-vs-stylegan2-vs-stylegan2-ada-vs-stylegan3-c5e201329c8a)
:::
### GIF (2020)
>a model built on StyleGAN that matches our goal closely, but it is complex and has few examples online
- Paper: [GIF: Generative Interpretable Faces (arXiv)](https://arxiv.org/abs/2009.00149)