Non-Linear Null Space Priors for Inverse Problems

ICLR 2026 Conference Submission 21331 Authors

19 Sept 2025 (modified: 08 Oct 2025)
Keywords: Inverse problems, null-space, denoising, computational imaging
Abstract: Inverse problems underpin many computational imaging systems, yet they are ill-posed because of the nontrivial null-space of the forward operator, i.e., the signal components that are invisible to the acquisition. Existing methods typically impose generic image priors to regularize these blind directions, but they do not model the null-space structure itself. We introduce a Non-Linear Null-space Prior (NLNP) that jointly learns (i) the image manifold via noise-conditional denoising and (ii) a low-dimensional representation of selected null-space components. Concretely, a prior network predicts a null-space code from a noisy image, while a measurement encoder predicts the same code from the measurements; at reconstruction time, we penalize the mismatch between the two predicted codes. Theoretically, we show that training the prior yields a projected Tweedie identity, so the network estimates the projected score of the data distribution, and the resulting regularizer injects orthogonal, state-dependent curvature into the null-space of the sensing matrix, improving conditioning without conflicting with data consistency. We integrate the prior into plug-and-play reconstruction and validate the approach on compressed sensing and image restoration tasks.
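The reconstruction objective sketched in the abstract — data consistency on the range of the forward operator plus a penalty on the mismatch between a null-space code predicted from the image and one predicted from the measurements — can be illustrated with a toy linear model. This is a minimal sketch under assumed ingredients: the operator `A`, the projector `P_null`, and the linear stand-ins `prior_code`/`meas_code` for the paper's learned networks are all hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model: A maps a 16-dim signal to 8 measurements,
# so 8 directions of the signal lie in null(A) and are invisible to A.
A = rng.standard_normal((8, 16))
A_pinv = np.linalg.pinv(A)
P_null = np.eye(16) - A_pinv @ A  # orthogonal projector onto null(A)

# Hypothetical linear stand-ins for the learned networks in the paper:
# `prior_code` plays the role of the prior network predicting a null-space
# code from the image; `meas_code` plays the measurement encoder.
W_prior = rng.standard_normal((4, 16))
W_meas = rng.standard_normal((4, 8))

def prior_code(x):
    # Code depends only on the null-space part of x, so the penalty
    # constrains exactly the directions the data term cannot see.
    return W_prior @ (P_null @ x)

def meas_code(y):
    return W_meas @ y

# Synthesize measurements from a ground-truth signal.
x_true = rng.standard_normal(16)
y = A @ x_true

# Gradient descent on: 0.5*||A x - y||^2 + lam * 0.5*||prior_code(x) - meas_code(y)||^2
lam, step = 0.1, 0.01
x = np.zeros(16)
for _ in range(2000):
    grad_data = A.T @ (A @ x - y)
    grad_mismatch = (P_null @ W_prior.T) @ (prior_code(x) - meas_code(y))
    x -= step * (grad_data + lam * grad_mismatch)

# The mismatch gradient lies entirely in null(A), so it never fights
# data consistency: the residual ||A x - y|| still converges to ~0.
print(float(np.linalg.norm(A @ x - y)))
```

Because `grad_mismatch` is projected into the null-space, the two terms act on orthogonal subspaces, which is the "no conflict with data consistency" property the abstract claims for the full non-linear prior.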
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 21331