Abstract: Despite being non-convex, deep neural networks are surprisingly amenable to optimization by gradient descent. In this note, we use a deep neural network with $D$ parameters to parametrize the input space of a generic $d$-dimensional non-convex optimization problem. Our experiments show that minimizing over the $D \gg d$ over-parametrized variables provided by the deep neural network, instead of the $d$ original ones, eases and accelerates the optimization of various non-convex test functions.
TL;DR: deep neural networks can be used to ease generic non-convex optimization problems
Keywords: non-convex optimization, deep neural networks, over-parametrized models
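The abstract leaves the details of the reparametrization open, so the following is a minimal sketch of the idea it describes: instead of running gradient descent directly on the $d$ variables, a neural network with $D \gg d$ weights produces the candidate point, and gradient descent runs on the weights. The PyTorch framework, the MLP architecture and widths, the fixed random input, the learning rate, and the Rosenbrock test function are all illustrative assumptions, not choices taken from the note.

```python
import torch

# Illustrative d-dimensional non-convex test function (Rosenbrock);
# the note's actual suite of test functions is not specified here.
def rosenbrock(x):
    return torch.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

d = 10  # dimension of the original optimization problem

# Over-parametrization: an MLP with D >> d weights maps a fixed input z
# to a candidate point x in R^d. We then minimize the objective with
# respect to the network's weights rather than x itself.
net = torch.nn.Sequential(
    torch.nn.Linear(d, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, d),
)
z = torch.randn(d)  # fixed input; only the D network weights are trained
D = sum(p.numel() for p in net.parameters())
print(f"D = {D} network parameters vs d = {d} original variables")

# Plain gradient descent on the D weights, matching the abstract's setup;
# the step size is an assumption.
opt = torch.optim.SGD(net.parameters(), lr=1e-4)
for step in range(5000):
    opt.zero_grad()
    x = net(z)            # candidate point produced by the network
    loss = rosenbrock(x)  # objective evaluated at the network's output
    loss.backward()       # gradients flow back through all D weights
    opt.step()
    if step % 1000 == 0:
        print(step, loss.item())
```

A natural baseline for this sketch is running the same loop with `x` as a raw `torch.nn.Parameter` of size $d$, which makes the claimed effect of the over-parametrization directly comparable.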