Continuous vs. Discrete Optimization of Deep Neural Networks

Published: 09 Nov 2021, Last Modified: 20 Oct 2024
NeurIPS 2021 Spotlight
Readers: Everyone
Keywords: Deep Learning, Non-Convex Optimization, Gradient Flow, Gradient Descent
TL;DR: We present a theory quantifying the discrepancy between gradient flow and gradient descent over deep neural networks, and use it to translate an analysis of gradient flow into a new convergence guarantee for gradient descent.
Abstract: Existing analyses of optimization in deep learning are either continuous, focusing on (variants of) gradient flow, or discrete, directly treating (variants of) gradient descent. Gradient flow is amenable to theoretical analysis, but is stylized and disregards computational efficiency. The extent to which it represents gradient descent is an open question in the theory of deep learning. The current paper studies this question. Viewing gradient descent as an approximate numerical solution to the initial value problem of gradient flow, we find that the degree of approximation depends on the curvature around the gradient flow trajectory. We then show that over deep neural networks with homogeneous activations, gradient flow trajectories enjoy favorable curvature, suggesting they are well approximated by gradient descent. This finding allows us to translate an analysis of gradient flow over deep linear neural networks into a guarantee that gradient descent efficiently converges to a global minimum almost surely under random initialization. Experiments suggest that over simple deep neural networks, gradient descent with a conventional step size is indeed close to gradient flow. We hypothesize that the theory of gradient flows will be key to unraveling mysteries behind deep learning.
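To illustrate the viewpoint in the abstract, the sketch below (not the authors' code; their repository is linked further down) compares gradient descent against a fine-grained Euler integration of gradient flow on a toy deep linear network with squared loss. All dimensions, step sizes, and helper names are illustrative assumptions; shrinking the step size should shrink the reported endpoint gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny deep linear network f(x) = W3 W2 W1 x, trained with squared loss.
# (Illustrative setup; dimensions and data are arbitrary.)
dims = [4, 4, 4, 4]
X = rng.standard_normal((dims[0], 32))
Y = rng.standard_normal((dims[-1], dims[0])) @ X   # targets from a random linear map

def init_weights():
    return [0.1 * rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]

def loss_and_grads(Ws):
    """Squared loss of the end-to-end map and gradients w.r.t. each layer."""
    W1, W2, W3 = Ws
    E = W3 @ W2 @ W1                       # end-to-end linear map
    R = E @ X - Y                          # residuals on the training set
    loss = 0.5 * np.mean(np.sum(R ** 2, axis=0))
    G = (R @ X.T) / X.shape[1]             # gradient w.r.t. the end-to-end map
    return loss, [(W3 @ W2).T @ G, W3.T @ G @ W1.T, G @ (W2 @ W1).T]

def run(step, n_steps, Ws):
    """Run n_steps of (explicit Euler) gradient steps of the given size."""
    Ws = [W.copy() for W in Ws]
    for _ in range(n_steps):
        _, grads = loss_and_grads(Ws)
        Ws = [W - step * g for W, g in zip(Ws, grads)]
    return Ws

W0 = init_weights()
eta, T = 0.05, 200
gd = run(eta, T, W0)              # gradient descent with step size eta
gf = run(eta / 100, T * 100, W0)  # fine Euler steps ~ gradient flow over time eta*T

gap = sum(np.linalg.norm(a - b) for a, b in zip(gd, gf))
print(f"GD loss {loss_and_grads(gd)[0]:.4f}, "
      f"GF loss {loss_and_grads(gf)[0]:.4f}, endpoint gap {gap:.4f}")
```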
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: pdf
Code: https://github.com/elkabzo/cont_disc_opt_dnn
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/continuous-vs-discrete-optimization-of-deep/code)