Sparse Iso-FLOP Transformations for Maximizing Training Efficiency

Published: 28 Oct 2023, Last Modified: 01 Dec 2023 · WANT@NeurIPS 2023 Poster
Keywords: sparsity, sparse training, efficient deep learning, efficient computations
TL;DR: We introduce a family of Sparse Iso-FLOP Transformations which can be used as drop-in replacements for dense layers to improve their modeling capacity and FLOP efficiency. We obtain significant wins across both CV and NLP domains.
Abstract: Recent works have explored the use of weight sparsity to improve the training efficiency (test accuracy w.r.t. training FLOPs) of deep neural networks (DNNs). These works aim to reduce training FLOPs, but training with sparse weights often leads to accuracy loss or requires longer training schedules, making the resulting training efficiency gains unclear. In contrast, we focus on using sparsity to increase accuracy while using the same FLOPs as the dense model, and show training efficiency gains through higher accuracy. In this work, we introduce Sparse-IFT, a family of Sparse Iso-FLOP Transformations that serve as drop-in replacements for dense layers and improve their representational capacity and FLOP efficiency. Each transformation is parameterized by a single hyperparameter (the sparsity level) and provides a larger search space in which to find optimal sparse masks. Without changing any training hyperparameters, replacing dense layers with Sparse-IFT leads to significant improvements across computer vision and natural language processing tasks, including ResNet-18 on ImageNet (+3.5%) and GPT-3 Small on WikiText-103 (-0.4 PPL), both matching larger dense model variants that use 2x or more FLOPs. To our knowledge, this is the first work to demonstrate the use of sparsity for improving the accuracy of dense models via a simple set of sparse transformations. Code is available at: https://github.com/CerebrasResearch/Sparse-IFT.
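
The sketch below illustrates the Iso-FLOP idea described in the abstract: a dense linear layer is replaced by a wider, statically masked sparse layer whose non-zero multiply-accumulates match the dense layer's FLOPs. The class name SparseWideLinear, the widening rule k = 1/sqrt(1 - sparsity), and the random static mask are illustrative assumptions, not the authors' exact implementation; see the linked repository for the official code.

```python
# Minimal sketch (assumptions noted above) of an Iso-FLOP "sparse wide" style
# replacement for nn.Linear. Widening both dimensions by k = 1/sqrt(1 - s)
# keeps the number of non-zero weights equal to the dense layer's parameter
# count, so the sparse layer spends the same FLOPs as the dense one.
import math
import torch
import torch.nn as nn


class SparseWideLinear(nn.Module):
    """Iso-FLOP sparse counterpart of nn.Linear(d_in, d_out) at a given sparsity."""

    def __init__(self, d_in: int, d_out: int, sparsity: float):
        super().__init__()
        # (1 - sparsity) * (k * d_in) * (k * d_out) == d_in * d_out  when  k = 1/sqrt(1 - sparsity)
        k = 1.0 / math.sqrt(1.0 - sparsity)
        self.d_in_wide, self.d_out_wide = round(k * d_in), round(k * d_out)
        self.linear = nn.Linear(self.d_in_wide, self.d_out_wide)
        # Static random mask for illustration; the paper's larger search space
        # comes from finding better masks than a random one.
        mask = (torch.rand(self.d_out_wide, self.d_in_wide) > sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.linear.weight * self.mask, self.linear.bias)


if __name__ == "__main__":
    dense = nn.Linear(512, 512)
    sparse_ift = SparseWideLinear(512, 512, sparsity=0.75)  # k = 2, so 1024 x 1024, 75% zeros
    dense_macs = dense.in_features * dense.out_features
    sparse_macs = sparse_ift.mask.sum().item()  # non-zero multiply-accumulates
    print(f"dense MACs: {dense_macs}, sparse-wide non-zero MACs: {sparse_macs:.0f}")
```

Note that because the widened layer changes its input and output dimensions, neighboring layers would need to be widened consistently for the network to remain well-formed; the sketch only demonstrates the FLOP-matching arithmetic for a single layer.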
Submission Number: 41