Neural network compression with heavy-tailed SGD

Published: 23 Oct 2023, Last Modified: 13 Nov 2023, HeavyTails 2023
Keywords: SGD, compressibility, heavy-tails, SDE, mean-field limits
TL;DR: For sufficiently overparametrized one-hidden-layer neural networks, we propose adding heavy-tailed noise to the iterates at each SGD iteration in order to achieve compressibility at any rate.
Abstract: Neural network compression has become an increasingly important subject, due to its practical implications in reducing computational requirements and its theoretical implications, as there is an explicit connection between compressibility and the generalization error. Recent studies have shown that the choice of the hyperparameters of stochastic gradient descent (SGD) can affect the compressibility of the learned parameter vector. Even though these results have shed some light on the role of the training dynamics in compressibility, they relied on unverifiable assumptions, and the resulting theory does not provide a practical guideline due to its implicitness. In this study, we propose a simple modification of SGD such that the outputs of the algorithm are provably compressible without any nontrivial assumptions. We consider a one-hidden-layer neural network trained with SGD and inject additive heavy-tailed noise into the iterates at each iteration. We then show that, for any compression rate, there exists a level of overparametrization (i.e., a number of hidden units) such that the output of the algorithm is compressible with high probability. We illustrate our approach with experiments, whose results suggest that the proposed method achieves compressibility at only a slight cost in training and test error.
Submission Number: 11
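The following is a minimal sketch, not the authors' code, of the procedure described in the abstract: a one-hidden-layer network trained with SGD, with additive symmetric alpha-stable (heavy-tailed) noise injected into the iterates after each update, followed by magnitude pruning as one concrete way to realize compression. The network width, step size, stability index `alpha`, noise scale, and pruning ratio are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: SGD + additive heavy-tailed noise on a one-hidden-layer network,
# then magnitude pruning. All hyperparameters below are illustrative assumptions.
import torch
from scipy.stats import levy_stable

torch.manual_seed(0)

n, d, width = 256, 10, 2048              # samples, input dim, hidden units (overparametrized)
X, y = torch.randn(n, d), torch.randn(n, 1)

model = torch.nn.Sequential(
    torch.nn.Linear(d, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

alpha, noise_scale = 1.8, 1e-4           # alpha < 2 gives heavier-than-Gaussian tails

for step in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
    # Inject additive symmetric alpha-stable noise into every parameter iterate.
    with torch.no_grad():
        for p in model.parameters():
            noise = levy_stable.rvs(alpha, 0.0, size=p.numel())
            p.add_(noise_scale * torch.as_tensor(noise, dtype=p.dtype).view_as(p))

# Compress by magnitude pruning: keep only the largest-magnitude 10% of each tensor
# (the pruning ratio is an illustrative choice, not prescribed by the abstract).
with torch.no_grad():
    for p in model.parameters():
        k = int(0.1 * p.numel())
        if k == 0:
            continue
        thresh = p.abs().flatten().kthvalue(p.numel() - k + 1).values
        p.mul_((p.abs() >= thresh).to(p.dtype))
```

In this reading, the heavy-tailed injection is what makes the trained parameter vector compressible (prunable at a chosen rate with small loss in accuracy) once the hidden layer is wide enough; the pruning step merely exercises that property.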