# Author Response
Dear Reviewer 9HCi,
Thank you for your further comments; we understand your concerns. We apologize that our previous submission may have led to some misunderstandings, which we would like to clarify as follows:
---
**Q1**: After reading the reviews of reviewer y4Xt, I'm in accordance with the point of experimenting on more challenging datasets. From the numbers, most are close to 100, which might indicate that CIFAR10 is a relatively easy dataset.
**R1**: We understand your concern and agree that our method should be evaluated on datasets beyond CIFAR-10. However, there are some misunderstandings that we would like to clarify.
- We agree that CIFAR-10 is a relatively simple dataset. However, this is not because most of the WSR numbers are close to 100. In general, **the WSR reaches nearly 100% before removal attacks, regardless of whether the dataset is simple or complex**, mostly because DNNs tend to learn simple 'shortcut' features, such as watermark-related features.
- In addition, **we have already included experiments on the CIFAR-100 dataset in our appendix**, but we failed to mention them in the main text of our previous submission (as noted in R2 to Reviewer 6TWD). For your convenience, we reproduce the table below:
| Type | Method | BA | WSR | FT | FP | ANP | NAD | MCR | NNL | AvgDrop |
|---------|--------|-------|--------|-------|-------|--------|-------|-------|-------|----------|
| Content | BD | **74.09** | 98.51 | 32.32 | 1.57 | 87.69 | 1.85 | 18.67 | 0.45 | $\downarrow$ 74.75 |
| | EW | 73.75 | 98.21 | 18.63 | 1.44 | 88.51 | 0.79 | 2.28 | 2.52 | $\downarrow$ 79.18 |
| | CW | 73.75 | 99.08 | 8.57 | 0.18 | 66.42 | 1.38 | 6.14 | 0.17 | $\downarrow$ 85.27 |
| | Ours | 73.69 | **99.62** | **97.74** | **97.25** | **99.39** | **93.50** | **97.04** | **20.02** | $\downarrow$ **15.46** |
| | | | | | | | | | | |
| Noise | BD | **74.13** | 99.94 | 60.54 | **10.03** | 96.55 | 20.57 | 52.77 | 0.12 | $\downarrow$ 59.85 |
| | EW | 73.43 | 99.87 | 10.73 | 9.79 | 95.62 | 6.69 | 8.75 | **12.99** | $\downarrow$ 75.78 |
| | CW | 73.49 | 99.98 | 24.38 | 1.80 | 55.95 | 3.28 | 38.44 | 0.05 | $\downarrow$ 79.33 |
| | Ours | 72.97 | **100.00** | **84.82** | 8.60 | **99.99** | **73.67** | **93.82** | 0.98 | $\downarrow$ **39.69** |
| | | | | | | | | | | |
| Unrelated | BD | **73.80** | **100.00** | 6.83 | 1.50 | 92.25 | 6.25 | 12.58 | 11.42 | $\downarrow$ 78.19 |
| | EW | 73.57 | **100.00** | 27.67 | 3.42 | 93.33 | 18.25 | 17.75 | 40.25 | $\downarrow$ 66.56 |
| | CW | 73.45 | 99.83 | 0.25 | 1.08 | 41.08 | 4.08 | 7.67 | 0.58 | $\downarrow$ 90.71 |
| | Ours | 72.27 | **100.00** | **97.42** | **44.67** | **100.00** | **94.08** | **97.25** | **45.17** | $\downarrow$ **20.24** |
The results above show that our method remains significantly better than all baselines on the CIFAR-100 dataset. **They indicate that our method scales reasonably to datasets with more classes and greater complexity**.
---
**Q2**: I'm not convinced by the novelty of cBN. Though it is indispensable of your method, it still contains limited novelty.
**R2**: Thank you for your comments; we understand your concerns. However, we respectfully disagree that cBN has limited novelty.
- To the best of our knowledge, we are the first to propose cBN, although a few previous works have introduced related BN variants [1-3].
- We hope the novelty of cBN will be assessed by its contribution and impact on our final method rather than simply by its technical complexity. Simplicity is not necessarily a drawback, since future model watermarking methods can easily adopt cBN to improve their robustness (see the brief sketch below).
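
For intuition, the sketch below shows a generic conditionally switched BN layer in the spirit of the BN variants in [1-3]: it keeps separate normalization statistics and affine parameters per condition (e.g., benign vs. watermarked samples) and routes each batch through the corresponding branch. This is a simplified illustration rather than the exact cBN formulation in our paper; the module and argument names are illustrative only.

```python
import torch
import torch.nn as nn


class ConditionalBatchNorm2d(nn.Module):
    """Simplified sketch of a conditionally switched BN layer (illustrative only).

    One nn.BatchNorm2d branch is kept per condition, so each condition has its
    own running statistics and affine parameters.
    """

    def __init__(self, num_features: int, num_conditions: int = 2):
        super().__init__()
        self.bns = nn.ModuleList(
            [nn.BatchNorm2d(num_features) for _ in range(num_conditions)]
        )

    def forward(self, x: torch.Tensor, cond: int) -> torch.Tensor:
        # Route the batch through the BN branch selected by its condition.
        return self.bns[cond](x)


# Usage: normalize benign samples (cond=0) and watermarked samples (cond=1)
# with separate statistics.
cbn = ConditionalBatchNorm2d(num_features=64, num_conditions=2)
benign_out = cbn(torch.randn(8, 64, 32, 32), cond=0)
watermark_out = cbn(torch.randn(8, 64, 32, 32), cond=1)
```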
References
1. Chang, Woong-Gi, et al. "Domain-Specific Batch Normalization for Unsupervised Domain Adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
2. Xie, Cihang, and Alan Yuille. "Intriguing Properties of Adversarial Training at Scale." International Conference on Learning Representations. 2020.
3. Xie, Cihang, et al. "Adversarial Examples Improve Image Recognition." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
---