Towards Adversarially Robust Condensed Dataset by Curvature Regularization

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: pdf
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: adversarial, robustness, dataset condensation, dataset distillation
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Dataset condensation is a recent technique designed to mitigate the rising computational demands of training deep neural networks. It does so by generating a significantly smaller, synthetic dataset derived from a larger one. While an abundance of research has aimed at improving the accuracy of models trained on synthetic datasets and enhancing the efficiency of synthesizing these datasets, there has been a noticeable gap in research focusing on analyzing and enhancing the robustness of these datasets against adversarial attacks. This is surprising considering the appealing hypothesis that condensed datasets might inherently promote models that are robust to adversarial attacks. In this study, we first challenge this intuitive assumption by empirically demonstrating that dataset condensation methods are not inherently robust. This empirical evidence propels us to explore methods aimed at enhancing the adversarial robustness of condensed datasets. Our investigation is underpinned by the hypothesis that the observed lack of robustness originates from the high curvature of the loss landscape in the input space. Based on our theoretical analysis, we propose a new method that aims to enhance robustness by incorporating curvature regularization into the condensation process. Our empirical study suggests that the new method is capable of generating robust synthetic datasets that can withstand various adversarial attacks.
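The abstract does not give the exact regularizer, but input-space curvature penalties of this kind are often approximated by a finite-difference term that penalizes how much the input gradient of the loss changes along a direction of steepest ascent. The sketch below is a hypothetical illustration of such a penalty, not the authors' method; the step size `h` and the use of the normalized gradient as the probe direction are assumptions.

```python
import torch
import torch.nn.functional as F

def curvature_penalty(model, x, y, h=1e-2):
    """Finite-difference estimate of input-space loss curvature.

    Approximates || grad_x L(x + h*z) - grad_x L(x) ||^2 along the
    normalized gradient direction z, encouraging a flatter loss
    surface around x. Illustrative sketch only; `h` and the choice
    of z are assumptions, not the paper's exact formulation.
    """
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g = torch.autograd.grad(loss, x, create_graph=True)[0]
    # probe direction: per-sample normalized gradient (detached)
    z = g.detach()
    z = z / (z.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-12)
    loss_pert = F.cross_entropy(model(x + h * z), y)
    g_pert = torch.autograd.grad(loss_pert, x, create_graph=True)[0]
    # mean squared change of the input gradient under the perturbation
    return (g_pert - g).flatten(1).norm(dim=1).pow(2).mean()
```

In a condensation setting, a term like this would be added to the condensation objective so that the synthetic images themselves are optimized toward low-curvature regions of the loss landscape.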
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7926