Abstract: In robust machine learning, there is a widespread belief that samples can be decomposed into robust features (parts of the data that withstand small perturbations) and non-robust ones, and that the role of a robust algorithm (e.g. adversarial training) is to amplify the former and erase the latter. In this work, we challenge this view and position adversarial robustness as a more model-dependent property: many approaches that assume this simplistic distinction in the features and optimize the data directly give rise only to superficial adversarial robustness. We revisit prior approaches in the literature that were believed to be robust, and devise a principled meta-learning algorithm that optimizes the dataset for robustness. Our method can be thought of as a non-parametric version of adversarial training, and it is of independent interest and potentially wider applicability. Specifically, we cast the bi-level optimization as a min-max procedure on kernel regression, with a class of kernels that describe infinitely wide neural nets (Neural Tangent Kernels). Through extensive experiments, we analyse the properties of models trained on the optimized datasets and identify their shortcomings, all of which share a similar flavor.
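The min-max structure described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it uses an RBF kernel as a stand-in for a Neural Tangent Kernel, random search in place of gradient ascent for the inner maximization, and a finite-difference step in place of the outer meta-gradient. The function names (`krr_predict`, `robust_dataset_step`) and all hyperparameters are assumptions made for this sketch.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # RBF kernel as a simple stand-in for the NTK used in the paper
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def krr_predict(X_train, y_train, X_test, reg=1e-3):
    # Kernel ridge regression: a closed-form "trained model",
    # which makes the bi-level problem tractable
    K = rbf_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train) @ alpha

def robust_dataset_step(X_train, y_train, X_val, y_val,
                        eps=0.1, lr=0.05, n_pert=8, h=1e-4):
    # Inner max: worst-case perturbation of the evaluation inputs
    # (random search here; the paper would use gradient-based attacks)
    best_loss, X_adv = -np.inf, X_val
    for _ in range(n_pert):
        delta = np.random.uniform(-eps, eps, X_val.shape)
        loss = ((krr_predict(X_train, y_train, X_val + delta) - y_val) ** 2).mean()
        if loss > best_loss:
            best_loss, X_adv = loss, X_val + delta
    # Outer min: descend the adversarial loss w.r.t. the *training data*
    # (finite differences here purely for illustration)
    base = ((krr_predict(X_train, y_train, X_adv) - y_val) ** 2).mean()
    grad = np.zeros_like(X_train)
    for i in range(X_train.shape[0]):
        for j in range(X_train.shape[1]):
            Xp = X_train.copy()
            Xp[i, j] += h
            lp = ((krr_predict(Xp, y_train, X_adv) - y_val) ** 2).mean()
            grad[i, j] = (lp - base) / h
    return X_train - lr * grad, base
```

Iterating `robust_dataset_step` optimizes the dataset itself, rather than model parameters, against worst-case perturbations, which is the sense in which the method is a non-parametric analogue of adversarial training.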
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning