An $\mathcal{A}$-adaptive Loop Unrolled Architecture for Solving Inverse Problems with Forward Model Mismatch
Keywords: Inverse problem, forward model mismatch, model inexactness, loop unrolling, variable splitting
TL;DR: This paper addresses forward model mismatch in inverse problems by introducing a forward model residual network into the loop unrolled architecture and jointly learning the updates to the model parameters and the reconstruction.
Abstract: In inverse problems (IPs), we aim to recover an underlying signal from noisy measurements generated according to a known forward model. Classical methods for solving IPs usually minimize a least-squares data fidelity term together with a predetermined regularization function, which often leads to unsatisfactory reconstructions. The \emph{loop unrolling} (LU) architecture addresses this issue by unrolling the optimization iterations into a sequence of neural networks that in effect learn a regularization function from data. While LU is currently a state-of-the-art method in many applications, its success depends critically on the accuracy of the forward model. This assumption can be limiting in many physical applications due to model simplifications or uncertainties in the apparatus. To address forward model mismatch, this work introduces a forward model residual network; with an extra variable splitting step, the proposed method can adapt to uncertain forward models accordingly. By jointly learning the updates to the reconstruction and the forward model, the method achieves a $\sim$2 dB PSNR improvement on blind image deblurring and blind seismic deconvolution tasks.
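To make the joint-update idea concrete, the following is a minimal NumPy sketch of an unrolled scheme that alternates a gradient step on the reconstruction with a residual correction of the forward operator. It is illustrative only, not the paper's architecture: the learned regularization network is replaced by soft-thresholding, and the forward model residual network is replaced by a plain gradient step on the operator; all function names and step sizes are assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    # Stand-in for the learned regularization network in each unrolled block.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_reconstruction(y, A0, n_iters=10, eta=0.05, tau=0.001, gamma=0.005):
    """Jointly update the reconstruction x and the forward operator A.

    y  : measurement vector (length m)
    A0 : initial, possibly mismatched, forward operator (m x n matrix)

    The A-update below plays the role of the paper's forward model
    residual network; here it is simply a gradient step on the
    least-squares data fidelity with respect to A (an assumption).
    """
    A = A0.copy()
    x = A.T @ y  # simple back-projection initialization
    for _ in range(n_iters):
        # Gradient step on the data fidelity, then the regularizer stand-in.
        x = soft_threshold(x - eta * A.T @ (A @ x - y), tau)
        # Residual correction of the forward model: grad of ||Ax - y||^2 / 2
        # with respect to A is (Ax - y) x^T.
        A = A - gamma * np.outer(A @ x - y, x)
    return x, A
```

A usage example: construct a sparse ground truth, blur it with a true operator, perturb the operator to simulate mismatch, and run the loop; both `x` and `A` are refined over the unrolled iterations.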
Submission Number: 37