Improving Foundation Models for Few-Shot Learning via Multitask Finetuning

Published: 04 Mar 2023, Last Modified: 16 May 2023, ME-FoMo 2023 Poster
Keywords: Foundation model, Contrastive learning, Multitask finetuning, Few-shot learning
TL;DR: We provide a theoretical analysis showing that multitask finetuning can further improve a foundation model for downstream few-shot learning. Our experimental results on real data verify the improvement.
Abstract: Foundation models have become essential tools for AI. In this paper, we study the problem of adapting foundation models, pre-trained using contrastive learning, to downstream tasks with limited labels. We explore the paradigm of finetuning a foundation model before adapting it to a target task, using a set of related tasks that each have a few labeled samples. We show both theoretically and empirically that, with a diverse set of related tasks, this finetuning leads to reduced error on the target task compared with directly adapting the same pre-trained model, e.g., at least a 6% target accuracy improvement on miniImageNet.
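To make the paradigm concrete: below is a minimal, illustrative PyTorch sketch of multitask finetuning, i.e., updating a pre-trained encoder by aggregating few-shot classification losses over a batch of related tasks before adapting to the target task. This is not the paper's implementation; the `Encoder`, `sample_task`, and `task_loss` names, the synthetic data, and the prototype-style loss are all hypothetical stand-ins chosen for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


# Hypothetical encoder standing in for a contrastive-pretrained foundation model.
class Encoder(nn.Module):
    def __init__(self, in_dim=32, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim)
        )

    def forward(self, x):
        # Normalized embeddings, as is common after contrastive pretraining.
        return F.normalize(self.net(x), dim=-1)


def sample_task(n_way=5, k_shot=5, in_dim=32):
    """Synthetic stand-in for one related task: n_way classes,
    k_shot labeled samples per class."""
    x = torch.randn(n_way * k_shot, in_dim)
    y = torch.arange(n_way).repeat_interleave(k_shot)
    return x, y


def task_loss(encoder, x, y):
    """Prototype-style loss within a task: score each embedding
    against the per-class mean embeddings."""
    z = encoder(x)
    protos = torch.stack([z[y == c].mean(0) for c in y.unique()])
    logits = z @ protos.t()  # cosine similarities, since z is normalized
    return F.cross_entropy(logits, y)


encoder = Encoder()  # in practice, load contrastive-pretrained weights here
opt = torch.optim.SGD(encoder.parameters(), lr=1e-2)

# Multitask finetuning: each step averages the loss over several related tasks,
# so the encoder is shaped by the whole task distribution, not a single task.
for step in range(100):
    opt.zero_grad()
    loss = sum(task_loss(encoder, *sample_task()) for _ in range(4)) / 4
    loss.backward()
    opt.step()
```

After this finetuning stage, the encoder would be adapted to the target few-shot task in the usual way (e.g., fitting a simple classifier on the target task's labeled samples).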