Explicit Induction Bias for Transfer Learning with Convolutional Networks

15 Feb 2018 (modified: 10 Feb 2022) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. The underlying assumption of fine-tuning is that the pre-trained model extracts generic features that are at least partially relevant to the target task, but would be difficult to learn from the limited amount of data available for that task. However, besides initialization with the pre-trained weights and early stopping, fine-tuning has no mechanism for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the fine-tuned solution to the initial model. We eventually recommend a simple $L^2$ penalty that uses the pre-trained model as a reference, and we show that this approach behaves much better than the standard scheme of weight decay applied to a partially frozen network.
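The sketch below illustrates the kind of regularizer the abstract recommends: an $L^2$ penalty on the distance between the current weights and the pre-trained weights, rather than ordinary weight decay toward zero. This is a minimal illustration and not the authors' code; the backbone (`resnet18`), the 256-class head, and the coefficients `alpha` and `beta` are assumptions chosen only for the example.

```python
# Minimal sketch (not the authors' implementation) of an L^2 penalty that pulls
# fine-tuned weights toward the pre-trained reference instead of toward zero.
# Assumes torchvision >= 0.13 for the `weights=` argument; hyperparameters are illustrative.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 256)  # new head for the target task

# Snapshot the pre-trained weights to serve as the regularization reference.
reference = {name: p.detach().clone() for name, p in model.named_parameters()}

alpha = 0.01  # strength of the penalty toward the pre-trained weights (assumed value)
beta = 0.01   # plain weight decay for the new, randomly initialized head (assumed value)

def regularizer(model):
    """L^2 distance to the pre-trained weights for transferred layers,
    plain L^2 norm (standard weight decay) for the new classifier head."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name.startswith("fc."):
            penalty = penalty + 0.5 * beta * p.pow(2).sum()
        else:
            penalty = penalty + 0.5 * alpha * (p - reference[name]).pow(2).sum()
    return penalty

# During training, add the penalty to the task loss:
#   loss = criterion(model(x), y) + regularizer(model)
```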
TL;DR: In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch.
Keywords: transfer learning, convolutional networks, fine-tuning, regularization, induction bias
Data: [Caltech-256](https://paperswithcode.com/dataset/caltech-256), [ImageNet](https://paperswithcode.com/dataset/imagenet), [Places](https://paperswithcode.com/dataset/places)