Easing non-convex optimization with neural networks
David Lopez-Paz, Levent Sagun
Feb 12, 2018 (modified: Feb 13, 2018) · ICLR 2018 Workshop Submission · Readers: everyone
Abstract: Despite being non-convex, deep neural networks are surprisingly amenable to optimization by gradient descent. In this note, we use a deep neural network with $D$ parameters to parametrize the input space of a generic $d$-dimensional non-convex optimization problem. Our experiments show that minimizing over the $D \gg d$ over-parametrized variables provided by the deep neural network eases and accelerates the optimization of various non-convex test functions.
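The abstract's idea, minimizing a test function $g(x)$ over the weights $\theta$ of a network $x = f_\theta(z)$ rather than over $x$ directly, can be sketched as follows. This is a minimal illustration, not the authors' code: the two-layer network shape, the Rastrigin test function, and the finite-difference optimizer are all assumptions made for the sketch.

```python
import numpy as np

# A small MLP with D >> d parameters maps a fixed latent vector z to a
# point x in R^d; we minimize a non-convex test function over the
# network's D weights instead of over x directly.

def rastrigin(x):
    # Classic non-convex test function; global minimum 0 at x = 0.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

d, h, k = 2, 16, 4                 # output dim d, hidden width h, input dim k
rng = np.random.default_rng(0)
z = rng.normal(size=k)             # fixed network input

def forward(theta):
    # x = f_theta(z): unpack the flat parameter vector into two layers.
    W1 = theta[: h * k].reshape(h, k)
    W2 = theta[h * k:].reshape(d, h)
    return W2 @ np.tanh(W1 @ z)

def objective(theta):
    return rastrigin(forward(theta))

D = h * k + d * h                  # D = 96 over-parametrized variables, d = 2
theta = 0.1 * rng.normal(size=D)
loss0 = objective(theta)

basis = np.eye(D)
eps, lr = 1e-5, 1e-3
for step in range(2000):
    # Central finite-difference gradient over all D parameters
    # (a stand-in for backpropagation, to keep the sketch dependency-free).
    grad = np.array([
        (objective(theta + eps * e) - objective(theta - eps * e)) / (2 * eps)
        for e in basis
    ])
    theta -= lr * grad

print(loss0, objective(theta))     # loss before vs. after optimization
```

In a practical setting one would compute the gradient over the $D$ weights with automatic differentiation rather than finite differences; the point of the sketch is only the re-parametrization of the search space from $d$ to $D \gg d$ variables.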
TL;DR: Deep neural networks can be used to ease generic non-convex optimization problems.
Keywords: nonconvex optimization, deep neural networks, overparametrized models