FEW-SHOT LEARNING WITH WEAK SUPERVISION

13 Mar 2021 (modified: 05 May 2023), Learning to Learn 2021
Keywords: meta-learning
TL;DR: We propose a Bayesian gradient-based meta-learning algorithm that can exploit weak supervision to reduce task ambiguity and improve performance.
Abstract: Few-shot meta-learning methods aim to learn the common structure shared across a set of tasks in order to facilitate learning new tasks from small amounts of data. However, given only a few training examples, many tasks are ambiguous. Such ambiguity can be mitigated with side information in the form of weak labels, which are often readily available. In this paper, we propose a Bayesian gradient-based meta-learning algorithm that can incorporate weak labels to reduce task ambiguity and improve performance. Our approach is cast in the framework of amortized variational inference and trained by optimizing a variational lower bound. The proposed method is competitive with state-of-the-art methods and achieves significant performance gains in settings where weak labels are available.
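Illustrative objective (a minimal sketch, not the paper's exact formulation, which is not given on this page; the notation below is an assumption): for a task with support set \mathcal{D}^s, query pair (x^q, y^q), weak labels w, task parameters \theta, and an amortized posterior q_\phi, a weakly supervised variational lower bound of the kind described above could take the form

\log p(y^q \mid x^q, \mathcal{D}^s, w) \;\ge\; \mathbb{E}_{q_\phi(\theta \mid \mathcal{D}^s, w)}\!\big[\log p(y^q \mid x^q, \theta)\big] \;-\; \mathrm{KL}\big(q_\phi(\theta \mid \mathcal{D}^s, w)\,\|\,p(\theta)\big),

where conditioning the amortized posterior on the weak labels w, in addition to the support set, is what would allow the model to reduce task ambiguity when the support examples alone are insufficient.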
Proposed Reviewers: Ali Ghadirzadeh (ghadiri@stanford.edu), Petra Poklukar (poklukar@kth.se), Mårten Björkman (celle@kth.se)
