Abstract: Few-shot learning aims to train a model from only a limited amount of data, but complex models are prone to overfitting in this regime. To mitigate the risk of overfitting, we propose a new meta-learning method termed Meta-Pruning, which diverges from traditional pruning methods by treating pruning as a learnable task: the model is trained to discern and select the network connections that are beneficial for new tasks. We further assign each neural network parameter its own learning rate, treat these learning rates as learnable parameters, and adopt a new parameter-update rule that reduces reliance on the initial learning-rate setting and lowers the manual tuning burden. Empirical evaluations on benchmark datasets indicate that our algorithm achieves higher classification accuracy than state-of-the-art algorithms at similar computational complexity. We also conduct learning-rate comparison experiments showing that the initial learning rate no longer affects our method.
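The per-parameter learnable learning rates described in the abstract can be sketched as follows. This is an illustrative example only, not the authors' released code: the names `make_learnable_lrs`, `inner_update`, and `alphas`, and the choice of PyTorch, are our assumptions, in the spirit of Meta-SGD-style differentiable inner-loop updates.

```python
import torch

# Illustrative sketch (assumed, not the paper's implementation):
# one learnable learning rate per model parameter, updated by an
# outer meta-optimizer so the initial setting matters less.

def make_learnable_lrs(model, init_lr=0.01):
    """Create one learnable learning-rate tensor per model parameter."""
    return [torch.full_like(p, init_lr, requires_grad=True)
            for p in model.parameters()]

def inner_update(model, loss, alphas):
    """Gradient step where each parameter uses its own learned rate.

    create_graph=True keeps the update differentiable w.r.t. alphas,
    so an outer (meta) optimizer can train the learning rates
    themselves alongside the model parameters.
    """
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    return [p - a * g for p, a, g in zip(model.parameters(), alphas, grads)]
```

Under this sketch, the outer loop would evaluate the adapted parameters returned by `inner_update` on a held-out (query) set and backpropagate that loss into `alphas`, which is one plausible way to realize the reduced dependence on the initial learning rate claimed above.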