Non-Gaussianity of Stochastic Gradient Noise

CoRR 2019 (modified: 08 Nov 2022)
Abstract: What enables Stochastic Gradient Descent (SGD) to achieve better generalization than Gradient Descent (GD) in neural network training? This question has attracted much attention. In this paper, we study the distribution of the Stochastic Gradient Noise (SGN) vectors during training. We observe that for batch sizes 256 and above, the distribution is best described as Gaussian, at least in the early phases of training. This holds across datasets, architectures, and other training choices.
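
To make the object of study concrete: the SGN vector is the difference between a minibatch gradient and the full-batch gradient. The sketch below (not the paper's code; the toy MLP, synthetic data, and the Shapiro-Wilk normality test are all illustrative assumptions) shows one way to sample SGN vectors at a fixed parameter setting and test a coordinate for Gaussianity.

```python
# Minimal sketch of sampling Stochastic Gradient Noise (SGN) and testing
# one coordinate for Gaussianity. The model, data, and choice of normality
# test are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
from scipy import stats

torch.manual_seed(0)

# Toy stand-in for a real dataset/architecture pair.
X = torch.randn(2048, 20)
y = torch.randint(0, 2, (2048,))
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def flat_grad(xb, yb):
    """Return the loss gradient over (xb, yb) as one flat vector."""
    model.zero_grad()
    loss_fn(model(xb), yb).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

full_grad = flat_grad(X, y)  # full-batch (GD) gradient

# One SGN sample = minibatch gradient minus full-batch gradient.
batch_size = 256
noise_samples = []
for _ in range(200):
    idx = torch.randperm(len(X))[:batch_size]
    noise_samples.append(flat_grad(X[idx], y[idx]) - full_grad)
noise = torch.stack(noise_samples)  # shape: (200, num_params)

# Test an arbitrary coordinate of the noise vector for Gaussianity.
stat, p_value = stats.shapiro(noise[:, 0].numpy())
print(f"Shapiro-Wilk p-value for coordinate 0: {p_value:.3f}")
```

A large p-value here fails to reject Gaussianity for that coordinate; repeating the test across coordinates, batch sizes, and training stages is one plausible way to probe the batch-size and training-phase dependence the abstract describes.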