Stop Wasting My Time! Saving Days of ImageNet and BERT Training with Latest Weight Averaging

Published: 20 Oct 2022, Last Modified: 10 Nov 2024, HITY Workshop NeurIPS 2022
Keywords: deep learning, optimization, speed up, weight averaging, bert, imagenet, resnet
Abstract: Training vision or language models on large datasets can take days, if not weeks. We show that averaging the weights of the k latest checkpoints, each collected at the end of an epoch, can speed up training progress, as measured by loss and accuracy, by dozens of epochs, corresponding to time savings of up to ~68 and ~30 GPU hours when training a ResNet50 on ImageNet and a RoBERTa-Base model on WikiText-103, respectively.
TL;DR: We show that averaging the weights of the $k$ latest checkpoints, each collected at the end of an epoch, can speed up the training progression in terms of loss and accuracy by dozens of epochs.
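The following is a minimal sketch of this latest-weight-averaging idea, assuming PyTorch; the function names, the training/evaluation calls, and the choice of k are illustrative placeholders, not the authors' code.

```python
# Sketch of averaging the k latest end-of-epoch checkpoints (assumes PyTorch).
from collections import deque
import copy
import torch


def average_checkpoints(buffer):
    """Average the floating-point parameters of all checkpoints in the buffer."""
    avg = copy.deepcopy(buffer[-1])  # start from the most recent checkpoint
    for name in avg:
        if not torch.is_floating_point(avg[name]):
            continue  # keep integer buffers (e.g., BatchNorm counters) from the newest checkpoint
        stacked = torch.stack([ckpt[name].float() for ckpt in buffer], dim=0)
        avg[name] = stacked.mean(dim=0).to(avg[name].dtype)
    return avg


# Usage sketch: collect a checkpoint at the end of each epoch, then evaluate the
# average of the k latest ones instead of the raw weights.
# k = 5                                      # illustrative value
# buffer = deque(maxlen=k)                   # FIFO buffer of the k latest checkpoints
# for epoch in range(num_epochs):
#     train_one_epoch(model, ...)            # hypothetical training loop
#     buffer.append(copy.deepcopy(model.state_dict()))
#     if len(buffer) == buffer.maxlen:
#         eval_model.load_state_dict(average_checkpoints(buffer))
#         evaluate(eval_model, ...)          # hypothetical evaluation
```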
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/stop-wasting-my-time-saving-days-of-imagenet/code)