Revisiting Hidden Representations in Transfer Learning for Medical Imaging

Published: 11 Sept 2023, Last Modified: 11 Sept 2023
Accepted by TMLR
Abstract: While a key component to the success of deep learning is the availability of massive amounts of training data, medical image datasets are often limited in diversity and size. Transfer learning has the potential to bridge the gap between related yet different domains. For medical applications, however, it remains unclear whether it is more beneficial to pre-train on natural or medical images. We aim to shed light on this problem by comparing ImageNet and RadImageNet initializations on seven medical classification tasks. Our work includes a replication study, which yields results contrary to previously published findings. In our experiments, ResNet50 models pre-trained on ImageNet tend to outperform those pre-trained on RadImageNet. To gain further insights, we investigate the learned representations using Canonical Correlation Analysis (CCA) and compare the predictions of the different models. Our results indicate that, contrary to intuition, ImageNet and RadImageNet may converge to distinct intermediate representations, which appear to diverge further during fine-tuning. Despite these distinct representations, the predictions of the models remain similar. Our findings show that the similarity between networks before and after fine-tuning does not correlate with performance gains, suggesting that the advantages of transfer learning might not solely originate from the reuse of features in the early layers of a convolutional neural network.
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/DovileDo/revisiting-transfer
Assigned Action Editor: ~Jessica_Schrouff1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1226
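The abstract describes comparing the hidden representations of ImageNet- and RadImageNet-initialized ResNet50 models with Canonical Correlation Analysis (CCA). As a rough, simplified sketch of that kind of analysis (not the authors' pipeline, which is in the linked repository), the snippet below computes a mean CCA similarity between corresponding ResNet50 layers. The RadImageNet checkpoint path and the random image batch are placeholders, and plain mean CCA is used here rather than any particular CCA variant the paper may employ.

```python
# Minimal sketch: mean CCA similarity between corresponding layers of two
# ResNet50 backbones (ImageNet-initialized vs. a stand-in for a
# RadImageNet-initialized model). Not the authors' code.
import numpy as np
import torch
from torchvision.models import resnet50, ResNet50_Weights


def mean_cca(X, Y):
    """Mean canonical correlation between two activation matrices.

    X, Y: (n_samples, n_features) arrays. Reliable estimates require many
    more samples than features, as in the paper's full experimental setup.
    """
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Orthonormal bases of the centered activation subspaces (thin QR).
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Canonical correlations are the singular values of Qx^T Qy.
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False).mean())


def layer_activations(model, images, layer_name):
    """Activations of one named layer, with spatial positions as samples."""
    feats = {}

    def hook(_module, _inputs, output):
        # (B, C, H, W) -> (B*H*W, C)
        feats["a"] = (
            output.permute(0, 2, 3, 1)
            .reshape(-1, output.shape[1])
            .detach()
            .cpu()
            .numpy()
        )

    handle = dict(model.named_modules())[layer_name].register_forward_hook(hook)
    with torch.no_grad():
        model(images)
    handle.remove()
    return feats["a"]


# ImageNet-initialized backbone (weights shipped with torchvision).
imagenet_model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1).eval()
# RadImageNet-initialized backbone: hypothetical local checkpoint.
radimagenet_model = resnet50(weights=None).eval()
# radimagenet_model.load_state_dict(torch.load("radimagenet_resnet50.pt"))

images = torch.randn(32, 3, 224, 224)  # stand-in for a batch of medical images
for layer in ["layer1", "layer2", "layer3"]:
    sim = mean_cca(
        layer_activations(imagenet_model, images, layer),
        layer_activations(radimagenet_model, images, layer),
    )
    print(f"{layer}: mean CCA = {sim:.3f}")
```

With a real RadImageNet checkpoint and the medical datasets used in the paper, the same per-layer comparison can be repeated before and after fine-tuning to track how the two initializations' representations evolve.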