Learn to Learn Consistently via Meta Self-distillation for Few-shot Classification

16 Sept 2025 (modified: 13 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Few-shot Learning, Meta Learning
Abstract: In few-shot learning, a model trained on disjoint base classes must solve novel tasks at test time using only a few examples. A central challenge is shortcut bias: the model can overfit to spurious cues (e.g., background, noise, shape, or color) that separate the few support examples during rapid adaptation but fail to generalize to the larger query sets within novel tasks. In this paper, we first define learning consistency (i.e., the degree to which the model acquires similar knowledge when trained on different views of the same data) and show empirically that higher consistency reduces reliance on shortcuts and improves generalization. Building on this insight, we propose Learn to Learn Consistently (LLC), a simple yet effective meta-learning method that maximizes learning consistency during training. In the inner loop, the model is updated separately using different augmented views of the same support set. In the outer loop, the same query set is used to enforce consistency across the resulting updates. Models initialized by LLC generalize better in the meta-testing phase. Extensive experiments demonstrate improved generalization across diverse settings and stronger learning consistency.
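The abstract describes a MAML-style inner/outer loop with a consistency term on the query set. Below is a minimal sketch of what one LLC meta-training step could look like, based only on that description; it is not the authors' implementation. The names `model`, `params`, `augment_a`, `augment_b`, `inner_lr`, and `consistency_weight` are hypothetical, and the symmetric-KL consistency term is one plausible choice of consistency measure, assumed for illustration.

```python
# Sketch of one LLC-style meta-training step (assumptions: a MAML-like setup
# using torch.func.functional_call with a dict of parameters; augmentations,
# losses, and hyperparameter names below are placeholders, not the paper's).
import torch
import torch.nn.functional as F
from torch.func import functional_call


def augment_a(x):
    # Placeholder "view A": small additive noise.
    return x + 0.01 * torch.randn_like(x)


def augment_b(x):
    # Placeholder "view B": horizontal flip.
    return torch.flip(x, dims=[-1])


def inner_update(model, params, x_support, y_support, inner_lr):
    """One inner-loop gradient step on an augmented view of the support set."""
    logits = functional_call(model, params, (x_support,))
    loss = F.cross_entropy(logits, y_support)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {name: p - inner_lr * g
            for (name, p), g in zip(params.items(), grads)}


def llc_outer_loss(model, params, task, inner_lr=0.01, consistency_weight=1.0):
    x_s, y_s, x_q, y_q = task  # support / query images and labels

    # Inner loop: adapt separately on two augmented views of the same support set.
    params_a = inner_update(model, params, augment_a(x_s), y_s, inner_lr)
    params_b = inner_update(model, params, augment_b(x_s), y_s, inner_lr)

    # Outer loop: evaluate both adapted models on the same query set.
    logits_a = functional_call(model, params_a, (x_q,))
    logits_b = functional_call(model, params_b, (x_q,))
    task_loss = F.cross_entropy(logits_a, y_q) + F.cross_entropy(logits_b, y_q)

    # Consistency term (assumed form): encourage the two adapted models to make
    # similar predictions on the query set via a symmetric KL divergence.
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    consistency = 0.5 * (
        F.kl_div(log_p_a, log_p_b, reduction="batchmean", log_target=True)
        + F.kl_div(log_p_b, log_p_a, reduction="batchmean", log_target=True)
    )

    return task_loss + consistency_weight * consistency
```

In this reading, the outer-loop gradient of the combined loss with respect to the initialization `params` flows back through both inner-loop updates, so the learned initialization is pushed toward regions where differently augmented support sets yield similarly behaving adapted models.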
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 7021