Closed-form Sample Probing for Learning Generative Models in Zero-shot Learning

29 Sept 2021, 00:30 (edited 16 Mar 2022) · ICLR 2022 Poster
  • Keywords: zero-shot learning, generative zero-shot learning, generative models
  • Abstract: Generative model-based approaches have led to significant advances in zero-shot learning (ZSL) over the past few years. These approaches typically aim to learn a conditional generator that synthesizes training samples of classes conditioned on class definitions. The final zero-shot learning model is then obtained by training a supervised classification model over the real and/or synthesized training samples of seen and unseen classes, combined. Therefore, naturally, the generative model needs to produce not only relevant samples, but also those that are sufficiently rich for classifier training purposes, which is handled by various heuristics in existing works. In this paper, we introduce a principled approach for training generative models {\em directly} for training data generation purposes. Our main observation is that the use of closed-form models opens doors to end-to-end training thanks to the differentiability of the solvers. In our approach, at each generative model update step, we fit a task-specific closed-form ZSL model from generated samples, and measure its loss on novel samples all within the compute graph, a procedure that we refer to as {\em sample probing}. In this manner, the generator receives feedback directly based on the value of its samples for model training purposes. Our experimental results show that the proposed sample probing approach improves the ZSL results even when integrated into state-of-the-art generative models.
  • One-sentence Summary: We show how to train a conditional generative model in a way that directly maximizes the value of its samples for zero-shot model training purposes.
  • Supplementary Material: zip
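The core idea in the abstract — fitting a closed-form model from generated samples inside the compute graph so that the probe loss backpropagates into the generator — can be illustrated with a minimal sketch. The snippet below is an illustrative toy, not the paper's implementation: the generator, dimensions, and the ridge-regression probe are all hypothetical stand-ins for whatever task-specific closed-form ZSL model the method uses.

```python
import torch

torch.manual_seed(0)

# Hypothetical toy dimensions (illustrative only; not from the paper).
d_attr, d_feat, n_gen, n_probe, n_cls = 8, 16, 32, 20, 5

# A toy conditional "generator": maps class attributes (+ noise) to features.
generator = torch.nn.Linear(d_attr, d_feat)

# Class attribute vectors and labels for the generated (synthetic) samples.
attrs = torch.randn(n_cls, d_attr)
gen_labels = torch.randint(0, n_cls, (n_gen,))
gen_feats = generator(attrs[gen_labels]) + 0.1 * torch.randn(n_gen, d_feat)

# One-hot targets for the closed-form probe (here: a ridge-regression classifier).
Y = torch.nn.functional.one_hot(gen_labels, n_cls).float()

# Closed-form ridge solution W = (X^T X + lam*I)^{-1} X^T Y.
# torch.linalg.solve is differentiable, so gradients flow through the fit.
lam = 1e-2
X = gen_feats
W = torch.linalg.solve(X.T @ X + lam * torch.eye(d_feat), X.T @ Y)

# Probe loss on held-out ("novel") samples, still inside the compute graph;
# backprop sends feedback from the probe's performance into the generator.
real_feats = torch.randn(n_probe, d_feat)
real_labels = torch.randint(0, n_cls, (n_probe,))
probe_loss = torch.nn.functional.cross_entropy(real_feats @ W, real_labels)

probe_loss.backward()
print(generator.weight.grad is not None)
```

In a real training loop, `probe_loss` would be added to the generator's usual objective and the whole fit-then-evaluate step repeated at each update; the key point is that the closed-form solve keeps the inner fit differentiable, which an iteratively trained inner classifier would not be.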