Abstract: In Transfusion: Understanding Transfer Learning for Medical Imaging by Raghu et al. [1]
(hereafter “their paper”), the authors investigate the efficacy of transfer learning from
natural image classification to medical image classification. In their paper, the authors compare the performance of models pretrained to convergence on ImageNet and then trained on the medical task against models trained only on the medical task. They find that, given enough data, medical-task accuracy does not differ significantly between the models with transfer learning and those without. Two state-of-the-art models, ResNet50 and InceptionV3, were compared alongside a family of smaller CNN models on the RETINA [2] and CheXpert [3] datasets. We reproduce their work for the state-of-the-art models on the RETINA task, using a similar, publicly available dataset, and offer an alternate interpretation of these experiments.
We suggest that, rather than the comparable performance of randomly initialized and transfer-initialized models marginalizing the usefulness of transfer learning, it is notable that models transferred from such a disparate domain do not perform worse overall.
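To make the comparison concrete, the sketch below (our illustration, not code from either paper) shows how the two initialization regimes can be set up with torchvision's ResNet50; the five-class output head is an assumption matching a diabetic-retinopathy grading task.

```python
# Minimal sketch of the two initialization regimes compared in the study.
# Assumptions (not from the original papers): torchvision's ResNet50 and a
# five-class output head for diabetic-retinopathy grading.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # assumed: five severity grades for the retinal task


def build_model(transfer: bool) -> nn.Module:
    """Return a ResNet50 initialized either from ImageNet weights (transfer)
    or from random weights (trained on the medical task only)."""
    if transfer:
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    else:
        model = models.resnet50(weights=None)
    # Replace the 1000-way ImageNet head with a task-specific classifier.
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model


transfer_model = build_model(transfer=True)   # ImageNet-pretrained, then fine-tuned
scratch_model = build_model(transfer=False)   # random init, medical task only
```

Both models are then trained with an identical procedure on the medical dataset, so any difference in final accuracy is attributable to the initialization.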
Track: Replicability
NeurIPS Paper Id: https://openreview.net/forum?id=B1e_PNSxLS