Pruning Feature Extractor Stacking for Cross-domain Few-shot Learning

TMLR Paper3573 Authors

28 Oct 2024 (modified: 17 Jan 2025) · Under review for TMLR · CC BY 4.0
Abstract: Combining knowledge from source domains to learn efficiently from a few labelled instances in a target domain is a transfer learning problem known as cross-domain few-shot learning (CDFSL). Feature extractor stacking (FES) is a state-of-the-art CDFSL method that maintains a collection of source domain feature extractors instead of a single universal extractor. FES uses stacked generalisation to build an ensemble from extractor snapshots saved during target domain fine-tuning. It outperforms several contemporary universal model-based CDFSL methods on the Meta-Dataset benchmark. However, it incurs a high storage cost because it saves a snapshot at every fine-tuning iteration for every extractor. In this work, we propose a bidirectional snapshot selection strategy for FES that leverages its cross-validation process and the ordered nature of its snapshots, and demonstrate that a 95% snapshot reduction can be achieved while retaining the same level of accuracy.
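The abstract does not specify how the bidirectional selection operates; as a rough illustration only, one plausible reading is a two-pointer scan over the snapshots, ordered by fine-tuning iteration, that repeatedly discards the end with the weaker cross-validation score until a storage budget is met. The function name, the score inputs, and the 5% default below are all assumptions, not the authors' actual algorithm:

```python
def bidirectional_select(cv_scores, keep_fraction=0.05):
    """Hypothetical sketch of bidirectional snapshot selection.

    cv_scores: cross-validation scores of snapshots, ordered by
    fine-tuning iteration. Shrinks the window from both ends,
    dropping whichever end currently scores lower, until only
    keep_fraction of the snapshots remain. Returns kept indices.
    """
    lo, hi = 0, len(cv_scores) - 1
    budget = max(1, int(len(cv_scores) * keep_fraction))
    while hi - lo + 1 > budget:
        if cv_scores[lo] <= cv_scores[hi]:
            lo += 1  # early snapshot is weaker: drop from the front
        else:
            hi -= 1  # late snapshot is weaker: drop from the back
    return list(range(lo, hi + 1))

# e.g. keeping 40% of five snapshots retains the strongest middle run
print(bidirectional_select([0.1, 0.2, 0.9, 0.8, 0.3], keep_fraction=0.4))
```

With the default `keep_fraction=0.05`, such a scan would keep roughly 5% of the snapshots, matching the 95% reduction reported in the abstract; the paper itself should be consulted for the actual selection criterion.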
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The manuscript is revised to address comments from the reviewers. New references are added to the literature review. Some parts of the literature review and method sections are revised for greater detail and clarity. Complexity and scalability analysis is added. The list of baselines discussed is updated. Additional discussion on the results is included. Please refer to our response to the reviewers' comments for the individual changes.
Assigned Action Editor: ~Kevin_Swersky1
Submission Number: 3573