Let data talk: data-regularized operator learning theory for inverse problems

TMLR Paper 7581 Authors

19 Feb 2026 (modified: 22 Feb 2026) · Under review for TMLR · CC BY 4.0
Abstract: Regularization plays a critical role in incorporating prior information into inverse problems. While numerous deep learning methods have been proposed to tackle inverse problems, where to impose regularization remains a crucial design choice. In this article, we introduce an approach called the ``data-regularized operator learning'' (DaROL) method, specifically designed to regularize inverse problems. In contrast to typical methods that impose regularization through the training of neural networks, the DaROL method trains a neural network on data that are regularized by well-established techniques, including Lasso regularization and Bayesian inference. Our DaROL method offers flexibility across various frameworks and features a simplified structure that clearly separates regularization from neural network training. In addition, we show that training a neural network on regularized data is equivalent to supervised learning of a regularized inverse mapping. Furthermore, we provide sufficient conditions for the smoothness of such a regularized inverse mapping and estimate the learning error with respect to the neural network size and the number of training samples.
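The two-stage pipeline described in the abstract can be sketched on a toy linear inverse problem. The sketch below is illustrative only, under assumptions not stated in the abstract: a linear forward map `A`, Lasso regularization solved by ISTA, and an ordinary least-squares fit standing in for the neural network trained on the regularized pairs. All names (`A`, `lam`, `ista`) are hypothetical choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 20                      # measurements y are lower-dimensional than signals x
A = rng.standard_normal((n, m))   # hypothetical linear forward operator

def ista(y, A, lam=0.1, iters=200):
    """Lasso solution min_x 0.5*||A x - y||^2 + lam*||x||_1 via ISTA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = largest singular value squared
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))      # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding
    return x

# Stage 1 (regularize the data): map each measurement y_i to its Lasso solution,
# producing training targets that already encode the sparsity prior.
X_true = rng.standard_normal((50, m)) * (rng.random((50, m)) < 0.2)  # sparse signals
Y = X_true @ A.T + 0.01 * rng.standard_normal((50, n))               # noisy measurements
X_reg = np.stack([ista(y, A) for y in Y])

# Stage 2 (supervised learning): fit a model on the pairs (y, x_reg).
# A linear least-squares map stands in for the neural network here.
W, *_ = np.linalg.lstsq(Y, X_reg, rcond=None)
mse = float(np.mean((Y @ W - X_reg) ** 2))
print(f"training MSE of the learned map: {mse:.4f}")
```

The point of the split is visible in the code: the regularization parameter `lam` shapes the targets in stage 1 and is entirely decoupled from the model fitting in stage 2, matching the abstract's claim that the two processes are clearly separated.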
Submission Type: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=JhiMnuFxS0
Changes Since Last Submission: We removed the acknowledgment section to avoid revealing author information.
Assigned Action Editor: ~Sharan_Vaswani1
Submission Number: 7581