# Response to R2
We thank **Reviewer 5LEe** for the thoughtful review and insightful recommendations. Below we respond to the key questions and comments, which we hope alleviates the major concerns:
## Comparison with ensemble trained on different domains
> Incremental Novelty: The proposed ensembles achieve “zero” transferability via the frequency-partitioning approach. But the “zero” transferability could also be realized by the ensembles of detectors that are trained with different domain features (e.g., pixel, frequency, light, and texture domain [“Face Forgery Detection by 3D Decomposition” in CVPR 2021]). These features need to be analyzed to highlight the innovation (like the pros and cons of frequency features) of this paper.
We acknowledge that an ensemble of detectors with different domain features could empirically achieve zero transferability, as they apply drastically different functions to the input, e.g., DCT vs. Texture extraction. However, each model is still a function of a common set of input (pixel) features – as such, we expect that their gradients are somewhat aligned. Thus, while an attack against one model/domain might not completely transfer, it might help make progress towards an attack against the other model/domain.
For example, a deepfake that is adversarial against the pixel-domain detector might not be adversarial against the texture-domain detector, but might still elicit high loss (suggesting that only minimal additional perturbation is needed to also fool the texture-domain classifier).
To confirm that this is indeed the case, we compute below the cosine similarities between gradients of detectors trained on different domain features (i.e., pixel, frequency, and texture). We train pixel- and frequency-domain classifiers using the same approach detailed in our draft, and a texture-domain classifier using the approach detailed in [1] (i.e., using global texture features extracted via a Gram Block). For each pair of models, we compute each model's loss gradient w.r.t. the input image and then compute the cosine similarity between the two gradients. We report the average similarity over 1000 images from the test set. We also repeat this experiment for the individual models of the D3-S(4) ensemble.
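For concreteness, the snippet below is a minimal sketch of this gradient-similarity computation, assuming PyTorch detectors and a shared test loader; `model_a`, `model_b`, `test_loader`, and the helper names are illustrative placeholders rather than our exact implementation.

```python
import torch
import torch.nn.functional as F

def input_gradient(model, x, y):
    """Loss gradient w.r.t. the input images for a single detector."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return grad.flatten(start_dim=1)  # one gradient vector per image

def avg_pairwise_cosine(model_a, model_b, loader, n_images=1000):
    """Average cosine similarity between the two models' input gradients."""
    sims, seen = [], 0
    for x, y in loader:
        g_a = input_gradient(model_a, x, y)
        g_b = input_gradient(model_b, x, y)
        sims.append(F.cosine_similarity(g_a, g_b, dim=1))
        seen += x.size(0)
        if seen >= n_images:
            break
    return torch.cat(sims)[:n_images].mean().item()
```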
Pairwise Average Cosine Similarities between Gradients of the {Pixel, Frequency, Texture} Models:

| Avg. Cosine Similarity | Pixel    | Frequency | Texture   |
|------------------------|----------|-----------|-----------|
| Pixel                  | 1        | 6.34E-06  | 2.26E-04  |
| Frequency              | 6.34E-06 | 1         | -6.19E-05 |
| Texture                | 2.26E-04 | -6.19E-05 | 1         |
Pairwise Average Cosine Similarities between Gradients of the D3-S(4) Models: {Model 1, Model 2, Model 3, Model 4}

| Avg. Cosine Similarity | Model 1   | Model 2   | Model 3   | Model 4   |
|------------------------|-----------|-----------|-----------|-----------|
| Model 1                | 1         | 7.13E-11  | -7.19E-12 | -3.23E-11 |
| Model 2                | 7.13E-11  | 1         | 1.57E-11  | -3.67E-13 |
| Model 3                | -7.19E-12 | 1.57E-11  | 1         | -1.39E-11 |
| Model 4                | -3.23E-11 | -1.39E-11 | -3.67E-13 | 1         |
We find that, in comparison with D3-S(4), the pixel-, frequency-, and texture-domain models exhibit significantly higher pairwise cosine similarities between their gradients (several orders of magnitude, roughly $10^5\times$ or more). This suggests that attacking the ensemble of these different-domain detectors is considerably easier than attacking D3-S(4). Preliminary experiments confirm that D3-S(4) indeed significantly outperforms this ensemble in adversarial accuracy, and we will add it as a baseline to our draft.
## Additional evaluation / clarification on existing results
> Experiments: (a) I'm wondering why the baseline (AT, ADP, GAL, DV) performs so poorly (in Table 1 \& 2, Fig. 5 \& 6), where APGD could easily achieve over 93\% attack success rate with a very small epsilon. The authors should guarantee that all the baselines are trained to their best performance. (b) The ablation studies are inadequate, e.g., the performance of the single detector, like D3-R(1) and D3-S(1), should also be presented for comparisons. (c) Insufficient generalization of testing data. More GAN-generated images (by BigGAN, CycleGAN, StarGAN, etc.) should be involved for generalization evaluation. (d) In Table 3 last column, why the attack success rate of APGD-CE at L2=1 is 0? Is that mean no adversarial subspace for the D3-S(4)? More explanations need to be stated.
1. We train all baselines to convergence of loss on the validation set, and experimented with both the default hyperparameters and a variety of alternatives (e.g., varied regularization loss weights, additional epochs, etc.). We also attempted to adversarially train the baseline ensembles – unfortunately, they do not converge under adversarial training. Our findings, and those of Carlini et al. [], also suggest that deepfakes require only minimal (i.e., small-epsilon) perturbations to elicit misclassification when the classifier (here, the baselines) is not robust – this is likely due to the small visual differences between the two classes (deepfake vs. real).
2. The D3-R(1) and D3-S(1) settings become identical to the AT baseline; we will clarify this in the draft.
3. To address the concern about the generalization of our testing data, we have conducted additional generalization experiments on images from more GANs (i.e., BigGAN and StarGAN).
BigGAN was trained on the ImageNet dataset – in contrast to StyleGAN, it is a conditional GAN that generates images for a specific label (out of the 1000 ImageNet classes). We use 50 generated images per class and randomly sample 50 real images per class from ImageNet. StarGAN was trained on the CelebA-HQ dataset – in contrast to StyleGAN and BigGAN, it is an image-to-image translation GAN that changes the "style" (i.e., hairstyle, eye color, etc.) of an existing real image. We use 30,000 images generated with randomly sampled styles, and 30,000 real images from the CelebA-HQ dataset.
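As an illustration of how such a label-balanced evaluation split can be assembled, the sketch below shows the per-class sampling for the BigGAN set (50 generated and 50 real images per ImageNet class); the path dictionaries and helper name are hypothetical placeholders, not our exact data pipeline.

```python
import random

def build_biggan_eval_split(fake_paths_by_class, real_paths_by_class,
                            per_class=50, seed=0):
    """Label-balanced evaluation split: label 1 = GAN-generated, 0 = real."""
    rng = random.Random(seed)
    split = []
    for cls in fake_paths_by_class:
        # Randomly sample an equal number of fake and real images per class.
        split += [(p, 1) for p in rng.sample(fake_paths_by_class[cls], per_class)]
        split += [(p, 0) for p in rng.sample(real_paths_by_class[cls], per_class)]
    rng.shuffle(split)
    return split
```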
The tables below present the adversarial accuracies (percentages, up to 100) of our best-performing ensemble, D3-S(4), and the baselines, when trained, tested, and attacked using the stronger white-box attacks on these additional datasets. We find that D3 continues to outperform, or perform on par with, the baselines. We hope these additional results make our evaluation more diverse and comprehensive.
## StarGAN
| Attack | Epsilon | AT (1) | ADP (4) | GAL (4) | DVERGE (4) | D3-S (4) |
|-----------------|---------|--------|---------|---------|------------|----------|
| L2 PGD-CE (50) | 0.5 | 45.3 | 87.5 | 13.5 | 25.3 | **100** |
| L2 PGD-CE (50) | 1 | 9.5 | 0 | 0.9 | 11.6 | **100** |
| L2 PGD-CE (50) | 5 | 0 | 0 | 0 | **2.7** | 0.1 |
| L2 PGD-CE (50) | 10 | 0 | 0 | 0 | **2.7** | 0.1 |
| L2 PGD-CW (50) | 0.5 | 20.9 | 90.8 | 16.2 | 19.3 | **100** |
| L2 PGD-CW (50) | 1 | 3.3 | 0.1 | 0.3 | 5.8 | **100** |
| L2 PGD-CW (50) | 5 | 0 | 0 | 0 | **2.7** | 0.1 |
| L2 PGD-CW (50) | 10 | 0 | 0 | 0 | **2.7** | 0.1 |
| Attack | Epsilon | AT (1) | ADP (4) | GAL (4) | DVERGE (4) | D3-S (4) |
|-------------------|---------|--------|---------|---------|------------|----------|
| Linf PGD-CE (50) | 0.004 | 14.6 | 8.6 | 1.6 | 18.5 | **100** |
| Linf PGD-CE (50) | 0.016 | 0 | 0 | 0 | 3 | **99.6** |
| Linf PGD-CE (50) | 0.032 | 0 | 0 | 0 | 0 | **0.1** |
| Linf PGD-CE (50) | 0.064 | 0 | 0 | 0 | 0 | **0.1** |
| Linf PGD-CW (50) | 0.004 | 11.5 | 17.4 | 2.3 | 19.3 | **100** |
| Linf PGD-CW (50) | 0.016 | 0 | 0 | 0 | 2.7 | **99.8** |
| Linf PGD-CW (50) | 0.032 | 0 | 0 | 0 | 0 | **0.1** |
| Linf PGD-CW (50) | 0.064 | 0 | 0 | 0 | 0 | **0.1** |
## BigGAN
| Attack | Epsilon | AT (1) | ADP (4) | GAL (4) | DVERGE (4) | D3-S (4) |
|-----------------|---------|--------|---------|---------|------------|----------|
| L2 PGD-CE (50) | 0.5 | 23.6 | 9.2 | 59.4 | 13.6 | **91** |
| L2 PGD-CE (50) | 1 | 16.6 | 9.1 | 27.1 | 2.8 | **91** |
| L2 PGD-CE (50) | 5 | 14.4 | 9.1 | 9.4 | 0 | **70.5** |
| L2 PGD-CE (50) | 10 | 14.4 | 9.1 | 9.4 | 0 | **26.2** |
| L2 PGD-CW (50) | 0.5 | 25.1 | 9.3 | 61.6 | 14.5 | **91** |
| L2 PGD-CW (50) | 1 | 15.4 | 9.1 | 27.4 | 3.1 | **91** |
| L2 PGD-CW (50) | 5 | 14.4 | 9.1 | 9.4 | 0 | **71** |
| L2 PGD-CW (50) | 10 | 14.4 | 9.1 | 9.4 | 0 | **26.2** |
| Attack | Epsilon | AT (1) | ADP (4) | GAL (4) | DVERGE (4) | D3-S (4) |
|-------------------|---------|--------|---------|---------|------------|----------|
| Linf PGD-CE (50) | 0.004 | 16.2 | 9.1 | 41.5 | 5.5 | **91** |
| Linf PGD-CE (50) | 0.016 | 14.4 | 9.1 | 9.4 | 0 | **90** |
| Linf PGD-CE (50) | 0.032 | 14.4 | 9.1 | 9.4 | 0 | **78.3** |
| Linf PGD-CE (50) | 0.064 | 14.4 | 9.1 | 9.4 | 0 | **26.6** |
| Linf PGD-CW (50) | 0.004 | 15.9 | 9.1 | 2.3 | 5.4 | **91** |
| Linf PGD-CW (50) | 0.016 | 14.4 | 9.1 | 43.8 | 0 | **90.5** |
| Linf PGD-CW (50) | 0.032 | 14.4 | 9.1 | 9.4 | 0 | **79.6** |
| Linf PGD-CW (50) | 0.064 | 14.4 | 9.1 | 9.4 | 0 | **26.7** |
4. We note that the adversarial subspace for D3-S(4) at $L_2$ = 1 has not become 0; rather, it has reduced in dimensionality, making it difficult for an adversary to find adversarial examples. In fact, when we increase the number of attack iterations to 2000, the robust accuracy decreases to 91.5\%. Additionally, Figure 5 in Appendix A.3 (supplementary material) presents adversarial accuracies for larger $L_2$ perturbations, showing that D3-S(4) does, as expected, eventually break.
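To illustrate the longer-iteration attack mentioned above, a simplified $L_2$ PGD-CE loop is sketched below; it is a stand-in for the APGD-CE implementation we actually use, and `ensemble` and the step-size heuristic are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_l2_ce(ensemble, x, y, eps=1.0, steps=2000, step_size=None):
    """Untargeted L2 PGD with cross-entropy loss; inputs assumed in [0, 1]."""
    step_size = step_size if step_size is not None else 2.5 * eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(ensemble(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Step along the L2-normalized gradient to maximize the loss.
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        new_delta = delta.detach() + step_size * grad / g_norm
        # Project back onto the L2 ball of radius eps and keep pixels valid.
        d_norm = new_delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        new_delta = new_delta * torch.clamp(eps / d_norm, max=1.0)
        new_delta = (x + new_delta).clamp(0, 1) - x
        delta = new_delta.requires_grad_(True)
    return (x + delta).detach()
```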
## Performance of D3 on CIFAR10 and miscellaneous issues
>Minor Issues: (a) In line 313, the authors state that the CIFAR10 may not exhibit redundancy features. However, the teal line in Fig. 2 performs relatively well. I’m curious about the results of the CIFAR10 classification task, if time allows. (b) Line 242 & 243, the typo for repeat “D3-R(2)”, “D3-S(4)”. (c) The references should be consistent, e.g., [13] and [30].
1. We present below results (adversarial accuracies, \%) for training, testing, and attacking the D3-S(4) ensemble, as well as the AT baseline, on the CIFAR10 classification task.
L2, APGD-CE, 50 steps
| Epsilon | 0.5 | 1 | 5 | 10 |
|----------|------|------|-----|-----|
| AT (1) | 15.1 | 13.6 | 0.9 | 0.8 |
| D3-S (4) | 63.5 | 39.7 | 8.2 | 6.2 |
Linf, APGD-CE, 50 steps
| Epsilon | 0.004 | 0.016 | 0.032 | 0.064 |
|----------|-------|-------|-------|-------|
| AT (1) | 1.7 | 0.1 | 0.1 | 0.1 |
| D3-S (4) | 59.8 | 1.2 | 0 | 0 |
While D3-S(4) offers some improvement at the smallest perturbation budgets, its robustness quickly drops off – we suspect that a more carefully chosen feature space, with redundancies suited to animal/vehicle classification, would improve these results.
2. We will fix the typos.
3. We will make the reference formatting consistent.
## Discussion of limitations and societal impacts
We will add further discussion of these areas to the draft. Regarding limitations: we will further emphasize that applying D3 to CIFAR10, ImageNet, and other image-forgery techniques would require additional investigation into useful and redundant feature spaces. Regarding societal impacts: we will highlight that the realism of modern deepfakes raises several threats (e.g., impersonation, disinformation), making robust deepfake detection a pressing problem. However, it is possible that more capable adversaries could incorporate D3 into their pipelines to generate "better" deepfakes that are more realistic and/or better evade detectors. Adversarial deepfakes also have benign use cases, e.g., anonymization of an end user on an online network; D3 would prevent this anonymization.
## References for this response
- [1] Zhengzhe Liu, Xiaojuan Qi, and Philip Torr. 2020. Global Texture Enhancement for Fake Face Detection in the Wild. In Proc. of CVPR.