Training a Vision Transformer from scratch in less than 24 hours with 1 GPU

Published: 20 Oct 2022, Last Modified: 10 Nov 2024, HITY Workshop NeurIPS 2022
Keywords: Transformer, Convolution, Curriculum Learning, Training with budget
TL;DR: This paper introduces a new approach to train Vision Transformers from scratch in less than 24 hours with only 1 GPU
Abstract: Transformers have become central to recent advances in computer vision. However, training a vision Transformer (ViT) model from scratch can be resource-intensive and time-consuming. In this paper, we aim to explore approaches to reduce the training costs of ViT models. We introduce algorithmic improvements that enable training a ViT model from scratch with limited hardware (1 GPU) and time (24 hours) resources. First, we propose an efficient approach to add locality to the ViT architecture. Second, we develop a new image-size curriculum learning strategy, which reduces the number of patches extracted from each image at the beginning of training. Finally, we propose a new variant of the popular ImageNet1k benchmark that adds hardware and time constraints. We evaluate our contributions on this benchmark and show they can significantly improve performance given the proposed training budget. We will share the code at https://github.com/BorealisAI/efficient-vit-training.
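The abstract names the two techniques without implementation detail; the sketches below are illustrative assumptions rather than the paper's actual code (see the linked repository for that).

First, locality. A common way to inject locality into a ViT is to mix neighbouring patch tokens with a depthwise convolution over the 2D patch grid (a CeiT/LocalViT-style design); whether this paper uses that exact mechanism is an assumption:

```python
import torch
import torch.nn as nn

class LocalTokenMixer(nn.Module):
    """Hypothetical locality module: a depthwise 3x3 convolution over the
    patch grid. This is one common way to add locality to ViTs; it is an
    assumption here, not necessarily the paper's mechanism."""
    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, tokens: torch.Tensor, grid_hw: tuple) -> torch.Tensor:
        # tokens: (batch, num_patches, dim). Reshape to the 2D patch grid,
        # mix neighbouring patches with a depthwise conv, then flatten back.
        b, n, d = tokens.shape
        h, w = grid_hw
        x = tokens.transpose(1, 2).reshape(b, d, h, w)
        x = self.dwconv(x)
        return x.flatten(2).transpose(1, 2)
```

Second, the image-size curriculum. Training starts on smaller images, so each image yields fewer patches and the quadratic cost of self-attention is lower; the resolution then grows toward the target size over training. A minimal sketch of such a schedule, assuming a linear ramp and 16-pixel patches (both are assumptions; the paper's exact schedule may differ):

```python
def image_size_at(step: int, total_steps: int,
                  min_size: int = 128, max_size: int = 224,
                  patch: int = 16) -> int:
    """Linearly ramp the training resolution from min_size to max_size,
    rounded to a multiple of the patch size so the patch grid stays valid.
    All schedule values are illustrative assumptions."""
    frac = min(step / max(total_steps - 1, 1), 1.0)
    size = min_size + frac * (max_size - min_size)
    return int(round(size / patch)) * patch

# Early steps use small images (fewer patches, cheaper attention);
# later steps use the full target resolution.
for step in (0, 2500, 5000, 7500, 9999):
    print(step, image_size_at(step, total_steps=10000))
```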
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/training-a-vision-transformer-from-scratch-in/code)