SMILE: Sample-to-feature MIxup for Efficient Transfer LEarning

06 Oct 2022 (modified: 05 May 2023) · INTERPOLATE at NeurIPS 2022
Keywords: transfer learning, mixup, interpolation, regularization
Abstract: To improve the performance of deep learning, mixup has been proposed to encourage neural networks to favor simple linear behavior in-between training samples. Performing mixup for transfer learning with pre-trained models, however, is not that simple: a high-capacity pre-trained model with a large fully-connected (FC) layer can easily overfit to the target dataset even with sample-to-label mixup applied. In this work, we propose SMILE (Sample-to-feature MIxup for EffIcient Transfer LEarning). With mixed images as inputs, SMILE regularizes the outputs of CNN feature extractors to learn from the mixed feature vectors of the inputs, in addition to the mixed labels. SMILE incorporates a mean teacher to provide the surrogate "ground truth" for the mixed feature vectors. Extensive experiments verify the performance improvement made by SMILE in comparison with a wide spectrum of transfer learning algorithms, including fine-tuning, L2-SP, DELTA, BSS, RIFLE, Co-Tuning and RegSL, even with mixup strategies combined.
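
The following PyTorch sketch illustrates the sample-to-feature mixup idea as described in the abstract: the student's features for mixed images are regularized toward a mix of a mean teacher's features for the original images, alongside the standard mixed-label loss. Function names (`smile_step`, `update_mean_teacher`), the MSE feature loss, and the loss weight `feat_loss_weight` are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

def update_mean_teacher(student: nn.Module, teacher: nn.Module, ema: float = 0.999):
    # Mean teacher: exponential moving average of the student's weights.
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema).add_(p_s, alpha=1.0 - ema)

def smile_step(student_backbone, classifier, teacher_backbone,
               x, y, alpha=0.2, feat_loss_weight=1.0):
    # Sample a mixup coefficient and a random pairing of the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]

    # Student features for the mixed images.
    feat_mix = student_backbone(x_mix)

    # Surrogate "ground truth": mix of the teacher's features
    # computed on the *original* (unmixed) inputs.
    with torch.no_grad():
        feat_target = (lam * teacher_backbone(x)
                       + (1.0 - lam) * teacher_backbone(x[perm]))

    # Classification loss on mixed labels (standard mixup cross-entropy).
    logits = classifier(feat_mix)
    cls_loss = (lam * F.cross_entropy(logits, y)
                + (1.0 - lam) * F.cross_entropy(logits, y[perm]))

    # Sample-to-feature regularizer: match mixed features to mixed teacher features.
    feat_loss = F.mse_loss(feat_mix, feat_target)
    return cls_loss + feat_loss_weight * feat_loss
```

In this sketch the teacher would typically be initialized as a copy of the pre-trained student backbone and refreshed with `update_mean_teacher` after each optimizer step, so that the feature targets track a smoothed version of the fine-tuned model rather than the fixed pre-trained weights.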