# Generative Models & Generative Adversarial Networks
###### tags: `Deep Learning for Computer Vision`
## From VAE to GAN
When training a VAE, what we really want is the generator (decoder), but the images it produces often look blurry or fake. The GAN architecture is introduced to address this problem.

## Generative Adversarial Network
* Generator converts a vector $Z$ (sampled from $P_z$) into fake data $X$ (distributed as $P_G$), and we want $P_G = P_{data}$
    * tries to generate more realistic images to fool the discriminator
* Discriminator classifies data as real or fake (1/0)
    * tries to distinguish whether an image is generated or real

### Objective Function
A **min-max game** on the following objective, which $G$ tries to minimize and $D$ tries to maximize:
\begin{equation}
\mathcal{L}_{GAN} = E_{z \sim P_z}[\log(1-D(G(z)))] + E_{y \sim P_{data}}[\log D(y)]
\end{equation}
* When training $D$, $G$ is fixed and we want $D(G(z))$ to be close to 0
* When training $G$, $D$ is fixed and we want $D(G(z))$ to be close to 1
* We want $D(y)$ to be close to 1 because $y$ is a real image

<img src="https://i.imgur.com/clT4es0.png" width="500"/>
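A minimal PyTorch-style sketch of one alternating training step under this objective; the networks `G` and `D`, the noise dimension `z_dim`, and the use of binary cross-entropy with a sigmoid-output discriminator are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def gan_training_step(G, D, real_y, opt_G, opt_D, z_dim=100):
    """One alternating update of D and G for the min-max objective above.
    Assumes D outputs a probability of shape (batch, 1)."""
    batch = real_y.size(0)

    # --- Update D: push D(y) toward 1 and D(G(z)) toward 0 (G fixed) ---
    z = torch.randn(batch, z_dim)
    fake = G(z).detach()                        # detach: G is fixed while updating D
    d_loss = F.binary_cross_entropy(D(real_y), torch.ones(batch, 1)) \
           + F.binary_cross_entropy(D(fake), torch.zeros(batch, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # --- Update G: push D(G(z)) toward 1 (D fixed) ---
    z = torch.randn(batch, z_dim)
    g_loss = F.binary_cross_entropy(D(G(z)), torch.ones(batch, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

The generator update shown here maximizes $\log D(G(z))$ (the common non-saturating variant) instead of directly minimizing $\log(1-D(G(z)))$; both push $D(G(z))$ toward 1.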

### How to solve the Discriminator
<img src="https://i.imgur.com/RaTWz5e.png" width="600"/>

With $G$ fixed, the discriminator maximizes an objective whose integrand has the form $f(y) = a\log y + b\log(1-y)$. Setting $f'(y) = \dfrac{a}{y}-\dfrac{b}{1-y}$ to zero gives the **optimal discriminator**.
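Written out, the pointwise maximization behind the figure:
\begin{equation}
\begin{aligned}
V(G, D) &= \int_x \big[\, p_{data}(x)\log D(x) + p_G(x)\log(1-D(x)) \,\big]\, dx \\
f(y) &= a\log y + b\log(1-y), \qquad a = p_{data}(x),\; b = p_G(x) \\
f'(y) &= \frac{a}{y}-\frac{b}{1-y} = 0 \;\Rightarrow\; y^{*} = \frac{a}{a+b} \\
D^{*}(x) &= \frac{p_{data}(x)}{p_{data}(x)+p_G(x)}
\end{aligned}
\end{equation}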
### How to solve the Generator
<img src="https://i.imgur.com/Wq7CxbE.jpg" width="600"/>
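Substituting the optimal discriminator $D^{*}$ back into the objective (as in the figure above), the generator's remaining problem is equivalent to minimizing a Jensen-Shannon divergence:
\begin{equation}
\min_G \; \mathcal{L}_{GAN}(G, D^{*}) = -\log 4 + 2\,\mathrm{JSD}(P_{data}\,\|\,P_G)
\end{equation}
which reaches its minimum exactly when $P_G = P_{data}$, i.e. when the generator's distribution matches the real data.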
## Deep Convolutional GAN (DC-GAN)
* ICLR 2016
* A CNN+GAN architecture (see the generator sketch below)
* Empirically make training of GAN more stable

* Example Results
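A minimal sketch of a DC-GAN-style generator (transposed convolutions, BatchNorm, ReLU, and a Tanh output); the channel widths and the 64x64 RGB output resolution are illustrative assumptions:

```python
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Maps a noise vector z of shape (B, z_dim, 1, 1) to a 64x64 RGB image."""
    def __init__(self, z_dim=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ngf * 8, 4, 1, 0, bias=False),   # -> 4x4
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), # -> 8x8
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False), # -> 16x16
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),     # -> 32x32
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),           # -> 64x64
            nn.Tanh(),                                                 # outputs in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)
```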

## Conditional GANs
* ICLR 2016
* Conditional generative model p(x|y) instead of p(x)
* Both G and D take the label y as an additional input

where $Y$ is a one-hot label vector like $[1,0,0,0,0,\dots]$.
Given the same input $Z$, choosing a different label $Y$ generates an image of the corresponding category.

For example:
* input real image $x_{male}$ with label $[1, 0]$: the Discriminator should output 1
* input real image $x_{male}$ with label $[0, 1]$ (wrong label): the Discriminator should output 0
* input **fake image** $x_{male}$ with label $[1, 0]$: the Discriminator should output 0
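A minimal sketch of how both networks can take the label as an extra input, here by simply concatenating the one-hot label to the noise vector (for $G$) and to a flattened image (for $D$); the fully-connected layers and image size are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, z_dim=100, num_classes=2, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, y_onehot):
        # Condition on the label by concatenating it to the noise vector.
        return self.net(torch.cat([z, y_onehot], dim=1))

class CondDiscriminator(nn.Module):
    def __init__(self, num_classes=2, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + num_classes, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x_flat, y_onehot):
        # Real image + matching label -> 1; fake image or wrong label -> 0.
        return self.net(torch.cat([x_flat, y_onehot], dim=1))
```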
## Problems in Training GANs
### Vanishing Gradients
GAN training is often unstable, so it may not converge properly.
If the two distributions $P_{data}$ and $P_G$ do not overlap, the JS divergence stays at the constant $\log 2$, so the loss provides no useful gradient for the network to learn from.
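To see why, recall $\mathrm{JSD}(P\,\|\,Q)=\tfrac12 KL\!\left(P\,\|\,\tfrac{P+Q}{2}\right)+\tfrac12 KL\!\left(Q\,\|\,\tfrac{P+Q}{2}\right)$. If the supports are disjoint, then $p_G = 0$ wherever $p_{data} > 0$ (and vice versa), so each ratio inside the logarithms is exactly 2:
\begin{equation}
\mathrm{JSD}(P_{data}\,\|\,P_G) = \frac{1}{2}\int p_{data}\log\frac{p_{data}}{\tfrac{1}{2}(p_{data}+p_G)} + \frac{1}{2}\int p_G\log\frac{p_G}{\tfrac{1}{2}(p_{data}+p_G)} = \frac{1}{2}\log 2 + \frac{1}{2}\log 2 = \log 2
\end{equation}
Since this value is constant regardless of how far apart the distributions are, the generator receives no learning signal.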

**The discriminator cannot express how far a generated image is from a real one**; its binary output does not reflect how realistic a generated image is. For example, even if the third image already looks like an apple, it is still classified as fake (not an apple).
### Mode Collapse
The generator only outputs a limited number of image variants regardless of the inputs.

To avoid being penalized by the discriminator, the **Generator** becomes "lazy" and only generates a few similar samples.
### t-SNE
https://openreview.net/pdf?id=rkmu5b0a-
<img src="https://i.imgur.com/Lv1VcQm.png" width="400"/>
## Energy-Based GAN
Designed to mitigate vanishing gradients: instead of the traditional GAN discriminator that only outputs 1 or 0, it outputs a continuous energy value.
* Energy Function
    * Converts input data into a scalar output, viewed as an energy value
    * Desired configurations should receive low energy values & vice versa
* Energy Function as Discriminator
    * Uses an autoencoder; can be pre-trained
    * **The reconstruction loss (L2-norm, cosine similarity, ...) outputs a range of values instead of a binary logistic loss** (see the sketch below)
    * Empirically better convergence
<img src="https://i.imgur.com/A8X5K9k.png" width="300"/>
The autoencoder learns to reconstruct the real data well but not the fake data. The same idea can be applied to the problem of **Anomaly Detection**.
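A minimal sketch of this idea: an autoencoder serves as the discriminator, its L2 reconstruction error is the energy, and the discriminator loss uses a margin to push fake samples to high energy. The margin value and layer sizes are assumptions for illustration:

```python
import torch
import torch.nn as nn

class AutoencoderD(nn.Module):
    """Discriminator = autoencoder; energy = L2 reconstruction error."""
    def __init__(self, img_dim=28 * 28, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, img_dim)

    def energy(self, x_flat):
        recon = self.dec(self.enc(x_flat))
        # Reconstruction error as a continuous "energy" score per sample.
        return ((recon - x_flat) ** 2).mean(dim=1)

def d_loss(D, real, fake, margin=10.0):
    # Low energy for real data; fake data is pushed above the margin.
    return D.energy(real).mean() + torch.relu(margin - D.energy(fake)).mean()

def g_loss(D, fake):
    # Generator tries to produce samples the autoencoder reconstructs well.
    return D.energy(fake).mean()
```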
### Example Results

## MSGAN
To alleviate mode collapse, an extra regularization term is added so that two similar latent inputs $z_1, z_2$ are pushed to produce more different outputs.
* Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis
* With the goal of producing diverse image outputs
* To address the **mode collapse** issue in **conditional GANs**

### Proposed Regularization
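The mode-seeking term encourages two different latent codes $z_1, z_2$ (with the same condition $y$) to map to distant images by maximizing the ratio of image distance to latent distance:
\begin{equation}
\mathcal{L}_{ms} = \max_G \left( \frac{d_I\big(G(z_1, y),\, G(z_2, y)\big)}{d_z(z_1, z_2)} \right)
\end{equation}
A sketch of this term in PyTorch; the choice of L1 distances, the generator signature $G(z, y)$ (matching the conditional-GAN sketch above), and minimizing the reciprocal of the ratio are assumptions for illustration:

```python
import torch

def mode_seeking_loss(G, y, z_dim=100, eps=1e-5):
    """Mode-seeking regularization: two latent codes, same condition y."""
    z1 = torch.randn(1, z_dim)
    z2 = torch.randn(1, z_dim)
    img1, img2 = G(z1, y), G(z2, y)
    # Ratio of image distance to latent distance (L1 distances here).
    ratio = torch.mean(torch.abs(img1 - img2)) / torch.mean(torch.abs(z1 - z2))
    # The generator should maximize the ratio; minimizing its reciprocal
    # achieves this when added to the usual conditional-GAN generator loss.
    return 1.0 / (ratio + eps)
```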

### Qualitative results
#### Conditioned on paired images

#### Conditioned on unpaired images

#### Conditioned on text

The condition $y$ does not have to be a one-hot vector; it can also be a text description.
## Style-based GAN
* A Style-Based Generator Architecture for Generative Adversarial Networks (CVPR’19)
* Design style-based generator to achieve high-resolution image synthesis
* No particular designs on loss functions, regularization, and hyper-parameters
### Style-based generator
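The key ideas are a mapping network that turns $z$ into an intermediate latent code $w$, and per-layer style modulation via AdaIN in the synthesis network. A minimal sketch of those two pieces; the layer count, dimensions, and the "+1" initialization trick on the scale are assumptions for illustration:

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """MLP that maps the latent z to the intermediate latent code w."""
    def __init__(self, z_dim=512, w_dim=512, n_layers=8):
        super().__init__()
        layers = []
        for i in range(n_layers):
            layers += [nn.Linear(z_dim if i == 0 else w_dim, w_dim),
                       nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

class AdaIN(nn.Module):
    """Adaptive Instance Norm: w sets a per-channel scale and bias."""
    def __init__(self, w_dim, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.affine = nn.Linear(w_dim, channels * 2)

    def forward(self, feat, w):
        scale, bias = self.affine(w).chunk(2, dim=1)
        scale = scale[:, :, None, None]
        bias = bias[:, :, None, None]
        # Normalize the feature map, then re-style it using w.
        return (1 + scale) * self.norm(feat) + bias
```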

## SinGAN
**Learning a Generative Model from a Single Natural Image**
* Learning from a **single image**
* Handle multiple image manipulation tasks
* Super-resolution, style conversion, harmonization, image editing, etc.

<br>
The image is resized to several different scales, and these resized images are used to train the generator at each level.
The bottom (coarsest) generator takes only a random noise map as input, while each higher-level generator takes the upsampled output of the previous level plus a random noise map.
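A sketch of that coarse-to-fine generation loop; the list `generators` (ordered coarsest to finest), the per-scale noise amplitudes `noise_amps`, the scale factor, and the generator signature `G(noisy_input, previous)` are assumed names for illustration:

```python
import torch
import torch.nn.functional as F

def singan_generate(generators, noise_amps, coarsest_shape, scale_factor=4 / 3):
    """Coarse-to-fine generation: each scale refines the upsampled previous output."""
    fake = torch.zeros(coarsest_shape)   # (1, 3, H0, W0): zeros, so the coarsest
                                         # generator effectively sees pure noise
    for level, (G, amp) in enumerate(zip(generators, noise_amps)):
        if level > 0:
            # Upsample the previous level's output to the current resolution.
            h = int(round(fake.shape[2] * scale_factor))
            w = int(round(fake.shape[3] * scale_factor))
            fake = F.interpolate(fake, size=(h, w), mode='bilinear',
                                 align_corners=False)
        z = amp * torch.randn_like(fake)  # a fresh noise map at every scale
        fake = G(fake + z, fake)          # refine the noisy, upsampled image
    return fake
```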

### Inference Stage for SinGAN

#### Paint-to-Image (input image + z)

#### Harmonization
SinGAN can fix the style mismatch between an object pasted from one image and the background of another.
For example, we can downsample the composite image and then pass it through the stack of generators to re-render it in a consistent style (because these generators were trained to generate images of that specific style).

#### Editing

#### Number of scales/levels in SinGAN
Injecting the downsampled input image at different levels of the generator pyramid produces different results.
For example, injecting at a lower (coarser) level lets the model change the image more, so the output looks more like the style of the training data; injecting at a higher (finer) level preserves most of the input, so the style changes less.
