Rademacher Complexity Over $\mathcal{H} \Delta \mathcal{H}$ Class for Adversarially Robust Domain Adaptation

Published: 01 Feb 2023, Last Modified: 27 Feb 2023, Submitted to ICLR 2023
Keywords: domain adaptation, learning theory, adversarial learning
Abstract: In domain adaptation, a model is trained on a dataset generated from a source domain and its generalization is evaluated on a possibly different target domain. Understanding the generalization capability of the learned model is a longstanding question. Recent studies have demonstrated that adversarially robust learning under $\ell_\infty$ attacks is even harder to generalize to a different domain. To thoroughly study the fundamental difficulty behind adversarially robust domain adaptation, we propose to analyze a key complexity measure that controls cross-domain generalization: the adversarial Rademacher complexity over the $\mathcal{H} \Delta \mathcal{H}$ class. For linear models, we show that the adversarial Rademacher complexity over the $\mathcal{H} \Delta \mathcal{H}$ class is always greater than its non-adversarial counterpart, which reveals the intrinsic hardness of adversarially robust domain adaptation. We also establish upper bounds on this complexity measure and extend them to the class of ReLU neural networks. Finally, by properly extending our generalization bound for adversarially robust domain adaptation, we explain \emph{why adversarial training can help transfer model performance to different domains}. We believe our results initiate the study of the generalization theory of adversarially robust domain adaptation, and could shed light on distributed adversarially robust learning from heterogeneous sources -- a scenario typically encountered in federated learning applications.
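For readers unfamiliar with the central quantity named above, the following display is a minimal sketch of one standard way to write it, assuming the usual symmetric-difference definition of the $\mathcal{H} \Delta \mathcal{H}$ class and the empirical Rademacher complexity with i.i.d. Rademacher variables $\sigma_i$; the exact formulation used in the paper (e.g., how the $\ell_\infty$ perturbation enters the supremum) may differ.

$$
\mathcal{H} \Delta \mathcal{H} := \{\, x \mapsto \mathbb{1}[h(x) \neq h'(x)] \;:\; h, h' \in \mathcal{H} \,\},
\qquad
\widehat{\mathfrak{R}}^{\,\mathrm{adv}}_{S}(\mathcal{H} \Delta \mathcal{H}) := \mathbb{E}_{\sigma}\!\left[ \sup_{g \in \mathcal{H} \Delta \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i \sup_{\|\delta_i\|_\infty \le \epsilon} g(x_i + \delta_i) \right].
$$

For intuition in the linear case discussed in the abstract, recall the standard closed form $\sup_{\|\delta\|_\infty \le \epsilon} w^\top (x + \delta) = w^\top x + \epsilon \|w\|_1$, which is how $\ell_\infty$-adversarial quantities are typically made explicit for linear hypotheses.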
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Theory (e.g., control theory, learning theory, algorithmic game theory)
TL;DR: This paper studies a variant of Rademacher complexity to analyze adversarially robust domain adaptation.
Supplementary Material: zip