Unsupervised Training of Convex Regularizers using Maximum Likelihood Estimation

TMLR Paper3244 Authors

26 Aug 2024 (modified: 06 Nov 2024) · Under review for TMLR · CC BY 4.0
Abstract: Imaging is a canonical inverse problem, where the task of reconstructing a ground truth from a noisy measurement is typically ill-conditioned or ill-posed. Recent state-of-the-art approaches for imaging use deep learning, spearheaded by unrolled and end-to-end models and trained on various image datasets. However, such methods typically require the availability of ground truth data, which may be unavailable or expensive, leading to a fundamental barrier that cannot be addressed by choice of architecture. Unsupervised learning presents a powerful alternative paradigm that bypasses this requirement by learning directly from noisy measurement data, without the need for any ground truth. A principled statistical approach to unsupervised learning is to maximize the marginal likelihood of the model parameters with respect to the given noisy measurements. This paper proposes an unsupervised learning approach that leverages maximum marginal likelihood estimation and stochastic approximation in order to train a convex neural network-based image regularization term directly on noisy measurements, improving upon previous work in both model expressiveness and dataset size. Experiments demonstrate that the proposed method produces image priors that are comparable in performance to the analogous supervised models for various image corruption operators, while maintaining significantly better generalization properties than end-to-end methods. Moreover, we provide a detailed theoretical analysis of the convergence properties of our proposed algorithm.
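To make the maximum marginal likelihood idea concrete, the sketch below shows a stochastic approximation scheme of the kind the abstract describes, on a deliberately simple toy problem. Everything here is an illustrative assumption, not the paper's method: the regularizer is a fixed convex function g(x) = ||x||²/2 with a single scalar weight theta (rather than a convex neural network), the measurements are direct noisy observations y = x + n, and the posterior/prior expectations in the marginal likelihood gradient are approximated with one unadjusted Langevin (ULA) step per chain per iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions, not the paper's): y = x + n with
# n ~ N(0, sigma^2 I), convex regularizer g(x) = ||x||^2 / 2, so the prior is
# p(x | theta) ∝ exp(-theta * g(x)) and the true theta used to draw x is 1.0.
d, sigma, theta_true = 200, 0.5, 1.0
x_true = rng.normal(0.0, 1.0 / np.sqrt(theta_true), d)
y = x_true + sigma * rng.normal(size=d)

def g(x):        # convex regularizer (here a simple quadratic)
    return 0.5 * np.dot(x, x)

def grad_g(x):
    return x

# Stochastic approximation for marginal likelihood maximization: the gradient
#   d/dtheta log p(y | theta) = E_prior[g(x)] - E_posterior[g(x)]
# is estimated by running one ULA step on each of two Langevin chains (one
# targeting the prior, one the posterior) and ascending theta with that estimate.
theta = 0.1                 # initial guess for the regularization weight
x_post = y.copy()           # chain targeting the posterior p(x | y, theta)
x_pri = np.zeros(d)         # chain targeting the prior p(x | theta)
step_x, step_theta = 1e-2, 1e-3
for _ in range(5000):
    # ULA step on the posterior: grad log p(x|y,theta) = (y - x)/sigma^2 - theta*grad_g(x)
    grad_post = (y - x_post) / sigma**2 - theta * grad_g(x_post)
    x_post += step_x * grad_post + np.sqrt(2 * step_x) * rng.normal(size=d)
    # ULA step on the prior: grad log p(x|theta) = -theta * grad_g(x)
    x_pri += -step_x * theta * grad_g(x_pri) + np.sqrt(2 * step_x) * rng.normal(size=d)
    # Stochastic ascent on theta, projected to stay positive (1/d scaling for stability)
    theta = max(theta + (step_theta / d) * (g(x_pri) - g(x_post)), 1e-4)

print(f"estimated theta: {theta:.2f}")
```

Under these assumptions the estimate drifts toward the data-generating weight (here near 1) using only the noisy measurements y, which is the essential point: no ground truth x is ever consulted. The paper replaces the quadratic g with a learned convex network and provides the convergence analysis that this sketch omits.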
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
* Fixed minor typos
* Added short discussion of unsupervised diffusion models
* Added clarification of Vidal et al. under TV regularization
* Moved Section 3.1 (assumptions for convergence results) to the appendix
* Added references for bias of SAPG
* Added SSIM to experiments
* Added train and test wallclock time for compared methods in appendix
Assigned Action Editor: ~Marwa_El_Halabi1
Submission Number: 3244