Stochastic gradient descent with noise of machine learning type. Part II: Continuous time analysis

Published: 2021, Last Modified: 12 May 2023 (CoRR 2021)
Abstract: The representation of functions by artificial neural networks depends on a large number of parameters in a non-linear fashion. Suitable values of these parameters are found by minimizing a 'loss functional', typically by stochastic gradient descent (SGD) or an advanced SGD-based algorithm. In a continuous-time model for SGD with noise that follows the 'machine learning scaling', we show that in a certain noise regime the optimization algorithm prefers 'flat' minima of the objective function, in a sense that differs from the flat-minimum selection of continuous-time SGD with homogeneous noise.
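For orientation, a minimal sketch of what such a continuous-time model can look like, assuming (as suggested by Part I of this series) that the stochastic-gradient noise amplitude scales with the value of the objective f and hence vanishes at zero-loss minima; the specific form below is an illustrative assumption, not the paper's exact model:

$$ d\theta_t = -\nabla f(\theta_t)\,dt + \sqrt{\eta\, f(\theta_t)}\;\sigma(\theta_t)\,dW_t $$

By contrast, the homogeneous-noise model has a diffusion term $\sqrt{\eta}\,\sigma(\theta_t)\,dW_t$ whose magnitude is independent of $f$, which is why the two settings can select 'flat' minima in different senses.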