Distribution Learnability and Robustness

Published: 21 Sept 2023, Last Modified: 06 Nov 2023
NeurIPS 2023 poster
Keywords: robustness, distribution learning
Abstract: We examine the relationship between learnability and robust learnability for the problem of distribution learning. We show that learnability implies robust learnability when the adversary is restricted to additive contamination (and, consequently, under Huber contamination), but not when the adversary may perform subtractive contamination. Thus, contrary to other learning settings (e.g., PAC learning of function classes), realizable learnability does not imply agnostic learnability. We also explore related implications in the context of compression schemes and differentially private learnability.
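As a rough formal sketch of the contamination models referenced in the abstract (these are common distribution-level formulations from the robust statistics literature; the symbols $p$, $q$, $\eta$, $\tilde{p}$ are introduced here only for illustration, and the paper's precise, possibly sample-level, definitions may differ):

\begin{align*}
  \text{additive (incl.\ Huber):} \quad & \tilde{p} = (1-\eta)\,p + \eta\,q, && q \text{ arbitrary},\\
  \text{subtractive:}             \quad & \tilde{p} = \frac{p - \eta\,q}{1-\eta}, && \eta\,q \le p \text{ pointwise},\\
  \text{arbitrary (agnostic):}    \quad & d_{\mathrm{TV}}(\tilde{p},\, p) \le \eta,
\end{align*}

where $p$ is the target distribution, $q$ an adversarially chosen distribution, and $\eta \in (0,1)$ the contamination level. Under this reading, an agnostic learner must tolerate arbitrary total-variation contamination, which subsumes the subtractive case, so robustness against additive contamination alone does not extend to the agnostic setting.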
Supplementary Material: pdf
Submission Number: 8740