Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization

ICML 2019 (modified: 11 Nov 2022)
Abstract: Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-co...
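The stepsize sensitivity the abstract refers to can be seen in a minimal SGD sketch. This is an illustrative toy example only (the quadratic objective, noise model, and stepsize value below are assumptions, not taken from the paper):

```python
import random

def sgd(grad_fn, theta, stepsize, iters=200):
    """Plain SGD: repeatedly step against a stochastic gradient.

    Convergence speed and stability hinge on `stepsize`,
    which in practice is hand-tuned per problem.
    """
    for _ in range(iters):
        theta = theta - stepsize * grad_fn(theta)
    return theta

# Toy objective f(x) = (x - 3)^2 with Gaussian gradient noise.
def noisy_grad(x):
    return 2 * (x - 3) + random.gauss(0, 0.1)

random.seed(0)
x = sgd(noisy_grad, theta=0.0, stepsize=0.1)
```

With `stepsize=0.1` the iterate settles near the minimizer at 3; a much larger stepsize on the same problem diverges, which is the hand-tuning burden the abstract describes.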