Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations

Published: 25 Sept 2024, Last Modified: 06 Nov 2024, NeurIPS 2024 poster, CC BY 4.0
Keywords: Zero-Shot Learning, Distribution Shift, Out of Distribution Generalization, Robust Representation Learning
TL;DR: This work tackles the challenge of learning data representations robust to class distribution shifts in zero-shot learning by constructing synthetic data environments and harnessing out-of-distribution generalization techniques.
Abstract: Zero-shot learning methods typically assume that the new, unseen classes encountered during deployment come from the same distribution as the classes in the training set. However, real-world scenarios often involve class distribution shifts (e.g., in age or gender for person identification), posing challenges for zero-shot classifiers that rely on representations learned from the training classes. In this work, we propose and analyze a model that assumes the attribute responsible for the shift is unknown in advance. We show that in this setting, standard training may lead to non-robust representations. To mitigate this, we develop an algorithm for learning robust representations in which (a) synthetic data environments are constructed via hierarchical sampling, and (b) an environment-balancing penalty, inspired by out-of-distribution generalization methods, is applied. We show that our algorithm improves generalization to diverse class distributions in both simulations and experiments on real-world datasets.
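
A minimal sketch of the two ingredients named in the abstract, assuming a PyTorch training loop: synthetic environments formed by hierarchically sampling subsets of training classes, and a V-REx-style variance penalty as one common out-of-distribution balancing term (the paper's exact penalty may differ). All function names below are illustrative, not the authors' implementation.

```python
import torch

def sample_environments(class_ids, num_envs, classes_per_env, generator=None):
    # Illustrative hierarchical sampling: first draw a subset of classes for
    # each synthetic environment; batches are then drawn from those classes.
    envs = []
    for _ in range(num_envs):
        perm = torch.randperm(len(class_ids), generator=generator)
        envs.append([class_ids[i] for i in perm[:classes_per_env]])
    return envs

def environment_balancing_penalty(env_losses):
    # V-REx-style balancing term: variance of the per-environment risks.
    losses = torch.stack(env_losses)
    return ((losses - losses.mean()) ** 2).mean()

def robust_objective(env_losses, penalty_weight=1.0):
    # Total objective: mean risk over environments plus the balancing penalty.
    mean_risk = torch.stack(env_losses).mean()
    return mean_risk + penalty_weight * environment_balancing_penalty(env_losses)

# Usage (hypothetical encoder, criterion, and per-environment batches):
# env_losses = [criterion(encoder(x_e), y_e) for x_e, y_e in env_batches]
# loss = robust_objective(env_losses, penalty_weight=10.0)
```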
Supplementary Material: zip
Primary Area: Evaluation (methodology, meta studies, replicability and validity)
Submission Number: 18272