A Better Way to Decay: Proximal Gradient Training Algorithms for Neural Nets

Published: 23 Nov 2022, Last Modified: 14 Jul 2024 · OPT 2022 Poster
Keywords: Overparameterized Deep Learning, Neural Network Regularization, Representation Cost
TL;DR: We propose a novel proximal gradient algorithm for training networks with the weight decay objective that generalizes better, finds sparser solutions, and is more robust.
Abstract: Weight decay is one of the most widely used forms of regularization in deep learning, and has been shown to improve generalization and robustness. The optimization objective driving weight decay is a sum of losses plus a term proportional to the sum of squared weights. This paper argues that stochastic gradient descent (SGD) may be an inefficient algorithm for this objective. For neural networks with ReLU activations, solutions to the weight decay objective are equivalent to those of a different objective in which the regularization term is instead a sum of products of $\ell_2$ (not squared) norms of the input and output weights associated with each ReLU. This alternative (and effectively equivalent) regularization suggests a novel proximal gradient algorithm for network training. Theory and experiments support the new training approach, showing that it can converge much faster to the sparse solutions it shares with standard weight decay training.
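For intuition about the kind of update the abstract describes, below is a minimal NumPy sketch (not taken from the paper) of a proximal-gradient training step for a one-hidden-layer ReLU network: a gradient step on the unregularized loss followed by a closed-form proximal step on each neuron's stacked input/output weights. The group soft-thresholding operator used here is an illustrative stand-in for the paper's product-of-norms penalty, and all function names and hyperparameters are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: proximal-gradient-style step for a one-hidden-layer ReLU net.
# The per-neuron group soft-threshold below is a stand-in prox; the paper's regularizer is
# the sum over ReLUs of ||input weights||_2 * ||output weights||_2, whose prox differs.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def group_soft_threshold(w, thresh):
    """Prox of thresh * ||w||_2: shrink the whole vector toward zero,
    zeroing it out entirely when its norm falls below thresh."""
    norm = np.linalg.norm(w)
    if norm <= thresh:
        return np.zeros_like(w)
    return (1.0 - thresh / norm) * w

def prox_sgd_step(W1, w2, X, y, lr=1e-2, lam=1e-3):
    """One step: gradient descent on the (unregularized) squared loss,
    then a closed-form prox applied per hidden neuron."""
    n = X.shape[0]
    H = relu(X @ W1)               # hidden activations, shape (n, k)
    err = H @ w2 - y               # residuals, shape (n,)

    # Gradients of the data-fit term only (the penalty is handled by the prox).
    grad_w2 = H.T @ err / n
    grad_W1 = X.T @ (np.outer(err, w2) * (H > 0)) / n

    W1 = W1 - lr * grad_W1
    w2 = w2 - lr * grad_w2

    # Prox step: per-neuron group shrinkage on the stacked in/out weights,
    # which can set entire neurons exactly to zero (sparsity).
    for j in range(W1.shape[1]):
        g = group_soft_threshold(np.concatenate([W1[:, j], w2[j:j + 1]]), lr * lam)
        W1[:, j], w2[j] = g[:-1], g[-1]
    return W1, w2
```

The key design point this sketch illustrates is that, unlike SGD on the squared-norm penalty, the prox step can drive whole neurons exactly to zero in a single closed-form update, which is how this family of algorithms reaches sparse solutions quickly.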
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/a-better-way-to-decay-proximal-gradient/code)