ResNet strikes back: An improved training procedure in timm

Published: 24 Nov 2021, Last Modified: 22 Oct 2023
Venue: ImageNet PPF 2021
Readers: Everyone
Keywords: imagenet, convolutional neural networks
TL;DR: We revisit the training of the vanilla ResNet-50 and significantly improve the performance of this baseline
Abstract: The influential Residual Networks designed by He et al. remain the gold-standard architecture in numerous scientific publications. They typically serve as the default architecture in studies, or as baselines when new architectures are proposed. Yet there has been significant progress on best practices for training neural networks since the inception of the ResNet architecture in 2015. Novel optimization and data-augmentation strategies have increased the effectiveness of training recipes. In this paper, we re-evaluate the performance of the vanilla ResNet-50 when trained with a procedure that integrates such advances. We share competitive training settings and pre-trained models in the timm open-source library, with the hope that they will serve as better baselines for future work. For instance, with our more demanding training setting, a vanilla ResNet-50 reaches 80.4% top-1 accuracy at resolution 224×224 on ImageNet-val without extra data or distillation. We also report the performance achieved with popular models with our training procedure.
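
Since the abstract points readers to the pre-trained models shared in the timm library, a minimal usage sketch may help: it loads a pretrained ResNet-50 through timm's public API and builds the matching 224×224 evaluation transform. Which checkpoint the `resnet50` name resolves to depends on the installed timm release (recent versions map it to an improved-recipe checkpoint such as `resnet50.a1_in1k`, an assumption here), and `example.jpg` is a hypothetical input image, so treat this as a sketch rather than the paper's exact evaluation pipeline.

```python
import torch
import timm
from timm.data import resolve_data_config, create_transform
from PIL import Image

# Load a pretrained ResNet-50 from the timm model zoo. Which checkpoint
# 'resnet50' resolves to depends on the installed timm version (assumption:
# recent releases default to an improved-recipe checkpoint).
model = timm.create_model('resnet50', pretrained=True)
model.eval()

# Derive the evaluation preprocessing (resize, center crop to 224x224,
# normalization) from the model's default data config.
config = resolve_data_config({}, model=model)
transform = create_transform(**config)

# 'example.jpg' is a hypothetical input image.
img = Image.open('example.jpg').convert('RGB')
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))
print(logits.softmax(dim=-1).argmax(dim=-1).item())  # predicted ImageNet class index
```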
Submission Track: Main track, 5 pages max
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2110.00476/code)