Accelerating Optimization using Neural Reparametrization

Published: 28 Jan 2022 · Last Modified: 13 Feb 2023 · ICLR 2022 Submitted · Readers: Everyone
Keywords: optimization, graph neural networks, neural reparameterization, neural tangent kernel
Abstract: We tackle the problem of accelerating certain optimization problems common in physics, namely finding steady states of ODEs and minimizing energy functions. We reparametrize the optimization variables as the output of a neural network and identify the conditions under which this neural reparameterization speeds up convergence during gradient descent. We find that to obtain the maximum speed-up, the neural network must be a special graph convolutional network (GCN) whose aggregation function is constructed from the gradients of the loss function. We demonstrate the utility of our method on two different optimization problems, on graphs and on point clouds.
One-sentence Summary: We show that some optimization processes can be accelerated by making the optimization variables the output of a graph neural network, with the graph arising from the gradients of the loss.
Supplementary Material: zip
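To make the core idea concrete, here is a minimal sketch of neural reparametrization in PyTorch. This is not the authors' implementation: the paper's key contribution concerns a special GCN whose aggregation function is built from the gradients of the loss, whereas the sketch below shows only the generic reparametrization step. The loss `energy`, the fixed input `z`, and the stand-in MLP `net` are illustrative assumptions.

```python
# Sketch: optimize the weights theta of a network f_theta and set
# x = f_theta(z), instead of running gradient descent on x directly.
import torch

def energy(x):
    # Placeholder energy: a simple quadratic. In the paper this would be a
    # physics-derived loss (e.g., an ODE steady-state residual).
    return (x ** 2).sum()

n = 64
z = torch.randn(n, 8)                # fixed input features (assumption)
net = torch.nn.Sequential(           # stand-in for the paper's special GCN
    torch.nn.Linear(8, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

opt = torch.optim.SGD(net.parameters(), lr=1e-2)
for step in range(1000):
    x = net(z).squeeze(-1)           # reparametrized variables x = f_theta(z)
    loss = energy(x)
    opt.zero_grad()
    loss.backward()                  # gradients flow through f_theta to theta
    opt.step()
```

The design choice being studied is exactly this substitution: descending on theta induces a different (preconditioned) dynamics on x than plain gradient descent, and the paper characterizes when, and with which network architecture, this accelerates convergence.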