Keywords: inverse problems, plug-and-play methods, neural networks, Lipschitz control, learning activation functions
Abstract: Ill-posed linear inverse problems are frequently encountered in image reconstruction tasks. Image reconstruction methods that combine the Plug-and-Play (PnP) priors framework with convolutional neural network (CNN) based denoisers have shown impressive performance. However, it is non-trivial to guarantee the convergence of such algorithms, which is necessary for sensitive applications such as medical imaging. It has been shown that PnP algorithms converge when deployed with a certain class of averaged denoising operators. While such averaged operators can be built from 1-Lipschitz CNNs, imposing this constraint on CNNs usually leads to a severe drop in performance. To mitigate this effect, we propose the use of deep spline neural networks, which benefit from learnable piecewise-linear spline activation functions. We introduce "slope normalization" to control the Lipschitz constant of these activation functions. We show that averaged denoising operators built from 1-Lipschitz deep spline networks consistently outperform those built from 1-Lipschitz ReLU networks.
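The idea behind slope normalization can be illustrated with a minimal sketch. The Lipschitz constant of a one-dimensional linear spline is the maximum absolute slope over its segments, so rescaling all slopes by that maximum makes the activation 1-Lipschitz. The class below is a hypothetical illustration with an explicit knot/slope parameterization and constant extrapolation beyond the knots; it is not the paper's exact construction.

```python
import numpy as np

class SplineActivation:
    """Toy learnable piecewise-linear activation (illustrative, not the
    paper's parameterization). Slopes are defined per segment between
    sorted knots; the function is constant outside the knot range, so
    its Lipschitz constant is max |slope|."""

    def __init__(self, knots, slopes, bias=0.0):
        self.knots = np.asarray(knots, dtype=float)    # sorted breakpoints
        self.slopes = np.asarray(slopes, dtype=float)  # one slope per segment
        self.bias = float(bias)                        # value at the first knot

    def lipschitz(self):
        # Exact Lipschitz constant of a 1-D linear spline.
        return float(np.abs(self.slopes).max())

    def normalize_slopes(self):
        # "Slope normalization": rescale so the activation is 1-Lipschitz.
        # Dividing by max(1, L) leaves already-1-Lipschitz splines unchanged.
        self.slopes = self.slopes / max(1.0, self.lipschitz())

    def __call__(self, x):
        # Values at the knots follow by accumulating slope * segment length.
        vals = self.bias + np.concatenate(
            ([0.0], np.cumsum(self.slopes * np.diff(self.knots)))
        )
        # np.interp clamps outside the knot range (zero-slope tails).
        return np.interp(x, self.knots, vals)
```

For example, a spline with slopes `[2.0, -3.0]` has Lipschitz constant 3; after `normalize_slopes()` the steepest segment has slope magnitude exactly 1, so the activation can safely be composed with 1-Lipschitz linear layers.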