# Bibliography on Deep Fake Detection
###### tags: `bibliography`
## Surveys:
### 1. [Adversarial Attacks Against Deep Generative Models on Data: A Survey](https://ieeexplore.ieee.org/document/9627776)
#### Attack Types:
##### Levels:
1. Data Level:
    1. Membership Inference Attacks: determine whether a given sample was part of the model's training set (a minimal sketch follows this list).
    2. Model Inversion Attacks: reconstruct some or all of the training data from prior information and the model's outputs.
2. Attribute Level:
    1. Attribute Inference Attacks: attempt to infer sensitive attributes of the data.
3. Model Level:
    1. Model Extraction Attacks: duplicate the entire trained model.
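
As a concrete illustration of the data-level setting, here is a minimal sketch of a confidence-thresholding membership inference attack. The `target_model`, its scikit-learn-style `predict_proba` interface, and the 0.9 threshold are illustrative assumptions, not details taken from the survey.

```python
import numpy as np

def infer_membership(target_model, sample, true_label, threshold=0.9):
    """Guess whether (sample, true_label) was in the target model's training set.

    Intuition: overfitted models are typically more confident on training
    members, so a high predicted probability for the true label is taken
    as evidence of membership. The threshold value is purely illustrative.
    """
    probs = target_model.predict_proba(np.asarray(sample).reshape(1, -1))[0]
    return probs[true_label] >= threshold
```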
----
##### Methodologies:
1. Poisoning Attack:
    1. Inject carefully crafted samples into the training set, thereby poisoning it; any model trained on the poisoned set learns the wrong behaviour and ends up with corrupted parameters.
    2. Damage part of the model's structure, such as its loss function, to alter the model's workflow.
2. Evasion Attack:
    1. Craft the model input to induce an unsatisfactory output; such an input is called an adversarial example (see the sketch after this list).
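
As a sketch of evasion, the following FGSM-style perturbation (one standard way to craft an adversarial example, not a method specific to this survey) nudges an input in the direction that increases the classifier's loss. The `model`, `epsilon`, and the [0, 1] input range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, x, y, epsilon=0.03):
    """Return an adversarial version of input batch x for a classifier `model`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss w.r.t. the true labels y
    loss.backward()
    # Step in the direction of the loss gradient's sign; clamp to a valid pixel range.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```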
----
##### DGM Concept:
A DGM is trained on a set $D_{train}$ consisting of numerous instances sampled from a real data distribution $P_{real}$, with the expectation that the training data distribution $P_{train}$ approximates $P_{real}$. The model learns the real data distribution from the training set and aims to generate samples that look real but are unseen. Here, $x$ denotes a real sample in the training set, $\bar{x}$ denotes a generated sample, and $D_{generated}$ and $P_{generated}$ denote the collection and distribution of the generated samples, respectively. For the generated data distribution $P_{generated}$ to be plausible, it must be close to the training data distribution and, in turn, to the real data distribution: $P_{generated} \approx P_{train} \approx P_{real}$. To maintain diversity, a latent code $z$ is randomly sampled from a distribution $P_z$; it serves as another representation of an input sample. A minimal sampling sketch follows below.
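
The sketch below illustrates the sampling pipeline just described with a toy PyTorch generator; the latent prior $P_z$ is taken to be a standard normal and all layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784      # illustrative sizes (e.g. flattened 28x28 images)

G = nn.Sequential(                  # toy generator standing in for a trained DGM
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, data_dim),
    nn.Tanh(),
)

z = torch.randn(16, latent_dim)     # latent codes z ~ P_z (assumed standard normal)
x_bar = G(z)                        # generated samples; collectively D_generated ~ P_generated
```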
----
##### Comparison

----
##### Detailed Attack Methodologies
1. Evasion Attacks
2. Membership Inference Attacks (MIAs)
3. Attribute Inference Attacks
4. Model Extraction Attacks
## Specific Attack Types
### Membership Inference Attack
#### 1. [Membership Inference Attacks against Machine Learning Models](https://arxiv.org/pdf/1610.05820.pdf)
##### Issues Solved:
- Privacy Leak from Machine Learning Models
##### Research Methods:
- Membership Inference Attacks
##### Outline:
Membership Inference: given a machine learning model and a record, determine whether that record was used as part of the model's training dataset (a shadow-model sketch follows below).
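
A hedged sketch of the paper's shadow-model idea: shadow models are trained on data whose membership status is known, and an attack classifier is then trained on their output probability vectors to separate members from non-members. The model choices, a single attack model (rather than one per class), and all sizes below are simplifying assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_attack_model(shadow_models, shadow_splits):
    """shadow_models: fitted classifiers mimicking the target model.
    shadow_splits: list of (X_in, X_out) pairs, where X_in was inside and
    X_out outside the corresponding shadow model's training set."""
    features, labels = [], []
    for model, (X_in, X_out) in zip(shadow_models, shadow_splits):
        features.append(model.predict_proba(X_in))    # members -> label 1
        labels.append(np.ones(len(X_in)))
        features.append(model.predict_proba(X_out))   # non-members -> label 0
        labels.append(np.zeros(len(X_out)))
    attack = RandomForestClassifier(n_estimators=100)
    attack.fit(np.vstack(features), np.concatenate(labels))
    return attack

# Illustrative usage: attack.predict(target_model.predict_proba(records))
# guesses, per record, whether it was a member of the target's training set.
```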