Can Neural Networks Improve Classical Optimization of Inverse Problems?

03 May 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: Inverse problems, neural networks, iterative optimization, chaos, convergence
TL;DR: We show that a neural network training procedure can enhance the solution quality for general inverse problems without requiring domain knowledge.
Abstract: Finding the values of model parameters from data is an essential task in science. While iterative optimization algorithms like BFGS can solve simple inverse problems to machine precision, their reliance on local information limits their effectiveness for complex problems involving local minima, chaos, or zero-gradient regions. This study explores the potential for overcoming these limitations by jointly optimizing multiple examples. To achieve this, we employ neural networks to reparameterize the solution space and leverage the training procedure as an alternative to classical optimization. This approach is as versatile as traditional optimizers and does not require additional information about the inverse problems, meaning it can be added to existing general-purpose optimization libraries. We evaluate the effectiveness of this approach by comparing it to traditional optimization on various inverse problems involving complex physical systems, such as the incompressible Navier-Stokes equations. Our findings reveal significant improvements in the accuracy of the obtained solutions.
Supplementary Material: zip
Submission Number: 1581
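The reparameterization idea described in the abstract can be illustrated with a toy sketch: rather than running a separate local optimizer for each observation, a small network maps every observation to a parameter estimate, and the network's weights are trained jointly on the forward-model residual of all examples at once. Everything below is a hypothetical stand-in, not the paper's actual setup: the cubic forward model `f`, the network size, and the learning rate are illustrative choices, and the gradients are written out by hand to keep the example dependency-free.

```python
import numpy as np

# Hypothetical forward model (a stand-in for e.g. a PDE solve).
# f(x) = x**3 is chosen only so its gradient is analytic; note that
# f'(x) -> 0 near x = 0, a mild version of the zero-gradient regions
# that trouble purely local optimizers.
def f(x):
    return x ** 3

def f_grad(x):
    return 3 * x ** 2

rng = np.random.default_rng(0)

# A batch of inverse problems: observations y, unknown parameters x*.
x_true = rng.uniform(-1.0, 1.0, size=(256, 1))
y = f(x_true)

# Tiny one-hidden-layer network y -> x_hat. Its weights replace the
# per-example unknowns: the solution space is reparameterized.
H = 32
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(3000):
    # Forward pass: network prediction, then the physics model.
    h = np.tanh(y @ W1 + b1)              # (N, H)
    x_hat = h @ W2 + b2                   # (N, 1)
    r = f(x_hat) - y                      # residual in observation space
    loss = np.mean(r ** 2)

    # Backprop: dL/dx_hat chains through the forward model's gradient,
    # so all 256 examples shape the weight update jointly.
    g_x = 2.0 * r * f_grad(x_hat) / len(y)
    gW2 = h.T @ g_x;               gb2 = g_x.sum(0)
    g_h = (g_x @ W2.T) * (1 - h ** 2)
    gW1 = y.T @ g_h;               gb1 = g_h.sum(0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final data loss: {loss:.2e}")
```

The joint training signal is the point of the sketch: an example whose own gradient has vanished (here, one with `x_hat` near zero) still receives useful weight updates from the other examples in the batch, which is one way a training procedure can differ from running BFGS independently per problem.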