TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness

21 May 2021, 20:47 (edited 28 Oct 2021) · NeurIPS 2021 Poster · Readers: Everyone
  • Keywords: robustness, ensemble, adversarial transferability, diversity
  • TL;DR: We propose an ensemble training approach, Transferability Reduced Smooth (TRS), to reduce the transferability among base models by enforcing low loss gradient similarity and model smoothness, which achieves state-of-the-art ensemble robustness.
  • Abstract: Adversarial transferability is an intriguing property: an adversarial perturbation crafted against one model is often also effective against another model, even when the two models come from different model families or training processes. To better protect ML systems against adversarial attacks, several questions arise: What are the sufficient conditions for adversarial transferability, and how can it be bounded? Is there a way to reduce adversarial transferability in order to improve the robustness of an ensemble ML model? To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; we then propose a practical algorithm that reduces the transferability among the base models of an ensemble to improve its robustness. Our theoretical analysis shows that promoting orthogonality between the gradients of base models alone is not enough to ensure low transferability; model smoothness is another important factor that controls transferability. We also provide lower and upper bounds on adversarial transferability under certain conditions. Inspired by our theoretical analysis, we propose an effective Transferability Reduced Smooth (TRS) ensemble training strategy that trains a robust ensemble with low transferability by enforcing both gradient orthogonality and model smoothness among the base models. We conduct extensive experiments on TRS, comparing it with 6 state-of-the-art ensemble baselines against 8 whitebox attacks on different datasets, and demonstrate that the proposed TRS significantly outperforms all baselines.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/AI-secure/Transferability-Reduced-Smooth-Ensemble
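The TL;DR above describes a training penalty that enforces low loss-gradient similarity and model smoothness across base models. A minimal NumPy sketch of such a regularizer is shown below; it is illustrative only, not the authors' implementation (see the linked repository for that). The function name `trs_regularizer`, the weights `lambda_sim`/`lambda_smooth`, and the use of the mean gradient norm as a smoothness proxy are all assumptions made for this sketch.

```python
import numpy as np

def cos_sim(g1, g2):
    # Cosine similarity between two gradient vectors (epsilon avoids 0/0).
    return float(np.dot(g1, g2) /
                 (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12))

def trs_regularizer(grads, lambda_sim=1.0, lambda_smooth=0.1):
    """TRS-style penalty (sketch, assumed form): average pairwise cosine
    similarity between the base models' input-loss gradients, plus the
    mean gradient norm as a crude stand-in for a model-smoothness term."""
    n = len(grads)
    sim, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            sim += cos_sim(grads[i], grads[j])
            pairs += 1
    sim /= max(pairs, 1)
    smooth = float(np.mean([np.linalg.norm(g) for g in grads]))
    return lambda_sim * sim + lambda_smooth * smooth
```

Minimizing this quantity during ensemble training pushes base-model gradients toward orthogonality (low pairwise cosine similarity) while keeping gradient magnitudes small, which is the intuition behind combining gradient diversity with smoothness.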