FSL-QuickBoost: Minimal-Cost Ensemble for Few-Shot Learning

Published: 20 Jul 2024 · Last Modified: 01 Aug 2024 · MM2024 Poster · CC BY 4.0
Abstract: Few-shot learning (FSL) typically trains models on data from one set of classes but tests them on data from a disjoint set of classes, providing only a few labeled support samples of the unseen classes as a reference for the trained model. Because there is little training data relevant to the target classes, generalization error on the test classes is usually high. Some existing methods address this generalization issue through ensembling; however, current ensemble-based FSL methods can be computationally expensive. In this work, we conduct empirical explorations and propose an ensemble method, QuickBoost, that is efficient and effective for improving the generalization of FSL. Specifically, QuickBoost pairs an alternative-architecture pretrained encoder with a one-vs-all binary classifier (FSL-Forest) based on the random forest algorithm, and ensembles it with off-the-shelf FSL models via logit-level averaging. Extensive experiments on three benchmarks demonstrate that our method achieves state-of-the-art performance with good efficiency.
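The logit-level averaging mentioned in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's implementation: it assumes each model outputs an (N, C) array of per-class logits for N query samples and C classes, and combines them with a (possibly weighted) average before taking the argmax.

```python
import numpy as np

def ensemble_logits(fsl_logits, forest_logits, weight=0.5):
    """Combine per-class logits from an off-the-shelf FSL model and a
    second classifier (e.g., a random-forest-based one-vs-all scorer)
    by weighted logit-level averaging. Shapes are assumed (N, C)."""
    fsl = np.asarray(fsl_logits, dtype=float)
    forest = np.asarray(forest_logits, dtype=float)
    return weight * fsl + (1.0 - weight) * forest

# Toy example: 2 query samples, 3 candidate classes (made-up numbers).
fsl = np.array([[2.0, 0.5, -1.0],
                [0.1, 1.2, 0.3]])
forest = np.array([[1.0, 1.5, -0.5],
                   [0.4, 0.8, 1.1]])
combined = ensemble_logits(fsl, forest)
pred = combined.argmax(axis=1)  # predicted class index per query
```

With equal weights this reduces to a plain mean of the two logit arrays; the `weight` parameter is an assumption added here to show that the averaging need not be uniform.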
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Media Interpretation
Relevance To Conference: This work tackles few-shot learning problems, which can accommodate different forms of media (e.g., few-shot facial recognition in computer vision or novel-class text sentiment classification in natural language processing).
Submission Number: 449
