Interpretable and robust blind image denoising with bias-free convolutional neural networks

Published: 21 Oct 2019, Last Modified: 05 May 2023
NeurIPS 2019 Deep Inverse Workshop Poster
TL;DR: We show that removing constant terms from CNN architectures provides interpretability of the denoising method via linear-algebra techniques and also boosts generalization performance across noise levels.
Keywords: Blind image denoising, interpretability of deep neural networks, generalization in deep neural networks
Abstract: Deep convolutional networks often append additive constant ("bias") terms to their convolution operations, enabling a richer repertoire of functional mappings. Biases are also used to facilitate training, by subtracting mean response over batches of training images (a component of "batch normalization"). Recent state-of-the-art blind denoising methods seem to require these terms for their success. Here, however, we show that bias terms used in most CNNs (additive constants, including those used for batch normalization) interfere with the interpretability of these networks, do not help performance, and in fact prevent generalization of performance to noise levels not included in the training data. In particular, bias-free CNNs (BF-CNNs) are locally linear, and hence amenable to direct analysis with linear-algebraic tools. These analyses provide interpretations of network functionality in terms of projection onto a union of low-dimensional subspaces, connecting the learning-based method to more traditional denoising methodology. Additionally, BF-CNNs generalize robustly, achieving near-state-of-the-art performance at noise levels well beyond the range over which they have been trained.
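To make the architectural idea concrete, below is a minimal PyTorch sketch of what a bias-free CNN denoiser might look like under one reading of the abstract: every convolution is created with bias=False, and batch normalization keeps only its multiplicative scaling (no mean subtraction, no learned shift). The class names BiasFreeBatchNorm2d and BFCNN, and all hyperparameters, are hypothetical illustrations, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class BiasFreeBatchNorm2d(nn.Module):
    """Batch normalization with all additive terms removed (sketch).

    No mean subtraction and no learned shift (beta); only a per-channel
    multiplicative scale is kept, so the layer is homogeneous of degree 1.
    """

    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        super().__init__()
        self.eps = eps
        self.momentum = momentum
        self.gamma = nn.Parameter(torch.ones(num_features))  # scale only
        self.register_buffer("running_var", torch.ones(num_features))

    def forward(self, x):
        if self.training:
            # Second moment over batch and spatial dims (no mean subtraction).
            var = x.pow(2).mean(dim=(0, 2, 3))
            self.running_var = (
                (1 - self.momentum) * self.running_var + self.momentum * var.detach()
            )
        else:
            var = self.running_var
        scale = self.gamma / torch.sqrt(var + self.eps)
        return x * scale.view(1, -1, 1, 1)


class BFCNN(nn.Module):
    """A minimal bias-free CNN denoiser (hypothetical architecture).

    With every additive constant removed, the network satisfies
    f(x) = J(x) x exactly, where J(x) is the input-dependent Jacobian,
    which is what makes local linear-algebraic analysis possible.
    """

    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1, bias=False),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       BiasFreeBatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1, bias=False)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```

Because there is no constant term, the Jacobian of such a network at a given noisy image (obtainable with standard autograd tools, e.g. torch.autograd.functional.jacobian) fully characterizes its local behavior, which is the property the abstract exploits to interpret the denoiser as a projection onto a union of low-dimensional subspaces.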