Keywords: Learning Theory, Multi-task and Transfer Learning, Adversarial Robustness
TL;DR: We give bounds for adversarially robust transfer learning.
Abstract: We study adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task.
In particular, we consider a multi-task representation learning (MTRL) setting, i.e., we assume that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a
deep neural network).
In this general setting, we provide rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses.
These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments.
Additionally, we provide novel rates for the single-task setting.
Primary Area: Learning theory
Submission Number: 19715