- Abstract: Understanding the worst-case loss of a defense under a determined attack is important for evaluating the robustness of a classification algorithm to data poisoning attacks. Although many defense methods exist, their effectiveness depends on the separability of the dataset's representation. We pose this as a domain adaptation problem and, in an adversarial setting, learn a function that transforms a dataset from a source domain to a target domain with established cluster separability. The defenses obtained in the target domain exhibit tighter upper bounds on the attack loss than those in the source domain.
- TL;DR: Domain adaptation for data poisoning defenses
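The abstract does not specify the authors' architecture, so the following is only a minimal sketch of the general idea it describes: adversarially learning a transform that maps source-domain data toward a target domain whose clusters are well separated. The linear map, the logistic domain discriminator, the synthetic data, and all hyperparameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip logits to avoid overflow in exp for extreme values.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Source domain: two overlapping clusters (poorly separable) -- illustrative data.
src = np.vstack([rng.normal([0.0, 0.0], 1.0, (100, 2)),
                 rng.normal([1.0, 1.0], 1.0, (100, 2))])
# Target domain: two well-separated clusters.
tgt = np.vstack([rng.normal([-4.0, -4.0], 0.5, (100, 2)),
                 rng.normal([4.0, 4.0], 0.5, (100, 2))])

# Transform T(x) = xW + b; discriminator d(z) = sigmoid(zv + c), label 1 = target.
W, b = np.eye(2), np.zeros(2)
v, c = rng.normal(0.0, 0.1, 2), 0.0
lr_d, lr_t = 0.05, 0.05  # assumed learning rates

for step in range(500):
    z_src = src @ W + b                      # transformed source samples
    # Discriminator step: distinguish target (1) from transformed source (0).
    z = np.vstack([z_src, tgt])
    y = np.concatenate([np.zeros(len(z_src)), np.ones(len(tgt))])
    p = sigmoid(z @ v + c)
    grad = (p - y) / len(y)                  # d(BCE)/d(logit)
    v -= lr_d * (z.T @ grad)
    c -= lr_d * grad.sum()
    # Transform step: fool the discriminator (non-saturating loss, label 1).
    p_src = sigmoid(z_src @ v + c)
    gz = ((p_src - 1.0) / len(z_src))[:, None] * v[None, :]  # d(BCE)/dz
    W -= lr_t * (src.T @ gz)
    b -= lr_t * gz.sum(axis=0)

z_src = src @ W + b
fool_rate = (sigmoid(z_src @ v + c) > 0.5).mean()  # fraction judged "target"
print("transformed shape:", z_src.shape)
print("fool rate: %.2f" % fool_rate)
```

In this sketch the defense would then be applied in the target domain, where the established cluster separability makes the certified upper bound on attack loss tighter; that downstream step is omitted here.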