SEFAR: SparsE-FeAture-based Regularization for Fine-Tuning on Limited Downstream Data

17 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: fine-tuning, transfer learning, regularization
TL;DR: In this paper, we propose a SparsE-FeAture-based Regularization (SEFAR) method that can significantly enhance the performance of any fine-tuning method when there is a limited amount of downstream data available.
Abstract: A commonly employed approach within the domain of transfer learning is fine-tuning: the meticulous crafting of novel loss functions or the adjustment of all or part of the parameters of a pre-trained network. However, most current fine-tuning methods require a substantial amount of downstream data, which can be limiting in real-world scenarios. When data are limited, an appropriate regularization method can enhance a model's generalization capability and reduce the risk of overfitting. In this paper, we propose a SparsE-FeAture-based Regularization (SEFAR) method that can significantly enhance the performance of any fine-tuning method when only a limited amount of downstream data is available. Our proposed method is simple to implement: it leverages the results generated by sparse features to self-distill the results produced by the complete features. This paper also provides insight into why SEFAR works: first, through a relation to the generalization bound of a kernel regression problem, and second, through the flatness of the resulting minima. Additionally, extensive empirical experiments demonstrate the benefits of this method for fine-tuning on various datasets using different backbones. The code will be released soon.
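Since the code has not yet been released, the following is only a minimal sketch of how the sparse-feature self-distillation regularizer described in the abstract might be implemented. The masking ratio, loss weight `alpha`, temperature, and the function name `sefar_loss` are illustrative assumptions, not details from the paper; the distillation direction (sparse-feature outputs acting as the detached teacher for the complete-feature outputs) is inferred from the abstract's wording.

```python
import torch
import torch.nn.functional as F

def sefar_loss(features, classifier, labels,
               sparsity=0.5, alpha=0.1, temperature=1.0):
    """Hypothetical sparse-feature-based regularization term.

    features:   backbone features for a batch, shape (B, D)
    classifier: the task head (e.g., a linear layer)
    sparsity, alpha, temperature: illustrative hyperparameters,
    not values taken from the paper.
    """
    # Standard fine-tuning path: logits from the complete features.
    full_logits = classifier(features)
    ce = F.cross_entropy(full_logits, labels)

    # Sparse features: randomly zero out a fraction of the feature dimensions.
    mask = (torch.rand_like(features) > sparsity).float()
    sparse_logits = classifier(features * mask)

    # Self-distillation: align the complete-feature predictions with the
    # (detached) sparse-feature predictions.
    distill = F.kl_div(
        F.log_softmax(full_logits / temperature, dim=-1),
        F.softmax(sparse_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    return ce + alpha * distill
```

In this sketch the regularizer is simply added to the usual cross-entropy objective, so it can in principle be combined with any existing fine-tuning procedure, which matches the abstract's claim that SEFAR can be layered on top of other fine-tuning methods.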
Supplementary Material: pdf
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 830