ICML 2019 (modified: 11 Nov 2022)
Abstract: Recent studies on the adversarial vulnerability of neural networks have shown that models trained to be more robust to adversarial attacks exhibit more interpretable saliency maps than their non-robust ...