Rethinking the Training Shot Number in Robust Model-Agnostic Meta-Learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: Model-agnostic meta-learning (MAML) has been successfully applied to few-shot learning, but it is not naturally robust to adversarial attacks. Previous methods imposed robustness-promoting regularization on MAML's bi-level training procedure to obtain an adversarially robust model. They follow the typical MAML practice of keeping the training shot number equal to the test shot number to guarantee optimal adaptation to novel tasks. However, we observe that introducing robustness-promoting regularization into MAML reduces the intrinsic dimension of features, which creates a mismatch between meta-training and meta-testing in terms of the affordable intrinsic dimension. As a consequence, previous robust MAML methods sacrifice a substantial amount of clean accuracy. Based on this observation, we propose a simple strategy to mitigate the intrinsic dimension mismatch caused by robustness-promoting regularization: increasing the number of training shots. Though simple, our method markedly improves the clean accuracy of MAML without much loss of robustness. Extensive experiments demonstrate that our method outperforms prior art in achieving a better trade-off between accuracy and robustness. In addition, we observe that our method is less sensitive to the number of fine-tuning steps during meta-training, which allows the number of fine-tuning steps to be reduced to improve training efficiency.
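To make the described setup concrete, below is a minimal sketch (not the authors' code) of MAML-style bi-level training with a robustness-promoting outer-loop term, where the number of training shots is larger than the 1-shot test setting. The classifier, the one-step FGSM attack used as the regularizer, the loss weighting, and all hyperparameters are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F


def forward(params, x):
    # Tiny two-layer classifier written functionally so adapted weights can be swapped in.
    w1, b1, w2, b2 = params
    return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)


def fgsm(params, x, y, eps=8 / 255):
    # One-step FGSM attack as a stand-in robustness-promoting regularizer (assumed choice).
    frozen = [p.detach() for p in params]           # attack the adapted weights; no meta-graph
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(forward(frozen, x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()


def inner_adapt(params, x_s, y_s, steps=1, lr=0.01):
    # MAML inner loop: a few gradient steps on the support (training-shot) set.
    for _ in range(steps):
        loss = F.cross_entropy(forward(params, x_s), y_s)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params


# Toy 5-way setup: meta-train with MORE shots (here 10) than the 1-shot test setting.
n_way, train_shots, n_query, dim, hidden = 5, 10, 15, 64, 32
params = [torch.randn(hidden, dim) * 0.01, torch.zeros(hidden),
          torch.randn(n_way, hidden) * 0.01, torch.zeros(n_way)]
params = [p.requires_grad_(True) for p in params]
meta_opt = torch.optim.Adam(params, lr=1e-3)

for step in range(2):                               # a couple of meta-steps on random data
    x_s = torch.rand(n_way * train_shots, dim)
    y_s = torch.arange(n_way).repeat_interleave(train_shots)
    x_q = torch.rand(n_way * n_query, dim)
    y_q = torch.arange(n_way).repeat_interleave(n_query)

    adapted = inner_adapt(params, x_s, y_s)

    # Outer objective: clean query loss plus an adversarial (robustness-promoting) term.
    clean_loss = F.cross_entropy(forward(adapted, x_q), y_q)
    robust_loss = F.cross_entropy(forward(adapted, fgsm(adapted, x_q, y_q)), y_q)
    meta_loss = clean_loss + 1.0 * robust_loss      # equal weighting is an assumption

    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

The only change relative to a standard robust-MAML sketch is that train_shots is set larger than the test shot number; everything else follows the usual inner-adapt / outer-update pattern.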
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning