Global Optimality Conditions for Deep Neural Networks

15 Feb 2018 (modified: 23 Feb 2018) · ICLR 2018 Conference Blind Submission
Abstract: We study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. For deep linear networks, we present necessary and sufficient conditions for a critical point of the risk function to be a global minimum. Surprisingly, our conditions provide an efficiently checkable test for global optimality, while such tests are typically intractable in nonconvex optimization. We further extend these results to deep nonlinear neural networks and prove similar sufficient conditions for global optimality, albeit in a more limited function space setting.
TL;DR: We provide efficiently checkable necessary and sufficient conditions for global optimality in deep linear neural networks, with some initial extensions to nonlinear settings.
Keywords: deep linear neural networks, global optimality, deep learning
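The abstract does not reproduce the paper's actual optimality conditions, but the setting it studies can be made concrete. Below is a minimal numerical sketch, assuming the standard deep linear model f(X) = W_L ⋯ W_1 X with squared error loss; the check shown (vanishing gradients plus a comparison against the unconstrained least-squares lower bound) only illustrates the kind of efficiently checkable test referred to above, not the authors' conditions. All variable names and dimensions are hypothetical.

```python
# Illustrative sketch (not the paper's test): for a deep linear network
# f(X) = W2 @ W1 @ X with squared error loss, check (i) whether all gradients
# vanish at a candidate point and (ii) whether its loss matches the
# unconstrained least-squares optimum, which lower-bounds any factored solution
# when there is no rank bottleneck. Dimensions below are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n = 5, 4, 3, 50
X = rng.standard_normal((d_in, n))
Y = rng.standard_normal((d_out, n))

# Illustrative weights of a two-layer linear network.
W1 = rng.standard_normal((d_hidden, d_in))
W2 = rng.standard_normal((d_out, d_hidden))

def loss(W2, W1):
    R = W2 @ W1 @ X - Y
    return 0.5 * np.sum(R ** 2)

def grads(W2, W1):
    R = W2 @ W1 @ X - Y        # residual of the end-to-end linear map
    g_W2 = R @ (W1 @ X).T      # dL/dW2
    g_W1 = W2.T @ R @ X.T      # dL/dW1
    return g_W2, g_W1

g2, g1 = grads(W2, W1)
is_critical = max(np.abs(g2).max(), np.abs(g1).max()) < 1e-8

# Unconstrained least-squares optimum W* = Y X^T (X X^T)^{-1}; its loss is a
# lower bound attainable here because d_hidden >= min(d_in, d_out).
W_star = Y @ X.T @ np.linalg.inv(X @ X.T)
lower_bound = 0.5 * np.sum((W_star @ X - Y) ** 2)

print("candidate is critical point:", is_critical)
print("loss at candidate:", loss(W2, W1), "| global lower bound:", lower_bound)
```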