Imbalanced Few-Shot Learning Based on Meta-transfer Learning

Published: 01 Jan 2023 · Last Modified: 09 Nov 2024 · ICANN (8) 2023 · CC BY-SA 4.0
Abstract: Few-shot learning is a challenging task that aims to adapt to new tasks with only a few labeled samples. Meta-learning is a promising approach to this challenge, but the meta-knowledge learned on training sets is not always useful because of class imbalance, task imbalance, and distribution imbalance. In this paper, we propose a novel few-shot learning method based on meta-transfer learning, called Meta-Transfer Task-Adaptive Meta-Learning (MT-TAML). Meta-transfer learning transfers the weight parameters of a pre-trained deep neural network, compensating for the limited capacity of the shallow networks typically used as feature extractors. To address the imbalance problems that arise in realistic few-shot scenarios, we introduce a learnable parameter for each task that balances the meta-knowledge. In addition, we propose a novel task training strategy that selects the most difficult class in each task and re-samples from it to form a harder task, further improving the model's accuracy. Our experimental results show that MT-TAML outperforms existing few-shot learning methods by 2–4%. Ablation experiments confirm the effectiveness of combining meta-transfer learning with the learnable balance parameters.
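The abstract describes two mechanisms: meta-transfer of frozen pre-trained weights (typically realized as learnable per-filter scaling and shifting) and a learnable per-task parameter that balances shared meta-knowledge against task-specific adaptation. The sketch below is only an illustrative assumption of how such components could look in PyTorch; names such as `SSConv2d`, `task_balance`, and `balanced_params` are hypothetical and do not come from the paper.

```python
# Hypothetical sketch (not the authors' released code): scale-and-shift (SS)
# adaptation of a frozen pre-trained conv layer, plus a learnable scalar that
# balances shared meta-knowledge against task-adapted parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SSConv2d(nn.Module):
    """Frozen pre-trained conv weights, adapted by learnable scale/shift."""

    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        # Freeze the pre-trained weights; only scale/shift are meta-learned.
        self.weight = nn.Parameter(conv.weight.detach(), requires_grad=False)
        self.bias = (nn.Parameter(conv.bias.detach(), requires_grad=False)
                     if conv.bias is not None else None)
        out_ch = conv.out_channels
        self.scale = nn.Parameter(torch.ones(out_ch, 1, 1, 1))  # per-filter scaling
        self.shift = nn.Parameter(torch.zeros(out_ch))          # per-filter shifting
        self.stride, self.padding = conv.stride, conv.padding

    def forward(self, x):
        w = self.weight * self.scale
        b = self.shift if self.bias is None else self.bias + self.shift
        return F.conv2d(x, w, b, stride=self.stride, padding=self.padding)


# Assumed form of the learnable per-task balance parameter: it interpolates
# between the shared meta-initialization and the task-adapted parameters.
task_balance = nn.Parameter(torch.tensor(0.5))

def balanced_params(meta_p: torch.Tensor, adapted_p: torch.Tensor,
                    alpha: torch.Tensor = task_balance) -> torch.Tensor:
    a = torch.sigmoid(alpha)  # keep the mixing weight in (0, 1)
    return a * adapted_p + (1 - a) * meta_p
```

In this reading, only the scale/shift tensors and the balance parameter receive gradients during meta-training, which keeps the number of adapted parameters small while still exploiting a deep pre-trained backbone.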