Correlated Weights in Infinite Limits of Deep Convolutional Neural Networks

Published: 21 Dec 2020, Last Modified: 05 May 2023 · AABI 2020
Keywords: NTK, NNGP, infinite limit, Gaussian process
TL;DR: Correlations between patches seemed to disappear when taking infinite-width limits of CNNs. We show that this can be avoided by adding spatial correlations to the prior over weights.
Abstract: Infinite-width limits of deep neural networks often have tractable forms. They have been used to analyse the behaviour of finite networks, and are also useful methods in their own right. Currently used limits of deep convolutional networks lose the correlations between contributions from different spatial locations in the image, unlike their finite counterparts, even when those use only convolutions. We argue that this is undesirable, and remedy it by introducing spatial correlations in the prior over weights. This leads to correlated contributions being preserved in the infinite-width limit. Varying the amount of correlation in the convolution weights allows interpolation between the independent-weight limit and mean pooling (which is equivalent to complete correlation in the weights). Empirical evaluation of the infinitely wide network shows that optimal performance is achieved between the extremes, indicating the usefulness of considering correlations in the weights.
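As a rough illustration of the interpolation described in the abstract, the sketch below is not the paper's construction: the function name `sample_correlated_filters`, the parameter `rho`, and the specific covariance form Σ = (σ²_w / p) · ((1 − ρ)I + ρ 11ᵀ) over the p spatial positions of a filter are assumptions made for this example. With ρ = 0 the filter taps are independent; with ρ = 1 all taps within a filter are equal, so the convolution acts as (scaled) mean pooling.

```python
import numpy as np

def sample_correlated_filters(c_out, c_in, k, rho, sigma_w=1.0, seed=0):
    """Draw conv filters of shape (c_out, c_in, k, k) with spatially correlated taps.

    Illustrative sketch only: the covariance over the k*k spatial positions is
    (sigma_w**2 / p) * ((1 - rho) * I + rho * ones), a simple one-parameter
    interpolation between independent taps (rho=0) and identical taps (rho=1).
    """
    rng = np.random.default_rng(seed)
    p = k * k  # number of spatial positions in one filter
    cov = sigma_w**2 / p * ((1.0 - rho) * np.eye(p) + rho * np.ones((p, p)))
    # Small jitter keeps the Cholesky factorisation valid when rho = 1 (rank-1 covariance).
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(p))
    # Channels are independent; only the spatial taps within each filter are correlated.
    z = rng.standard_normal((c_out, c_in, p))
    w = z @ L.T
    return w.reshape(c_out, c_in, k, k)

# rho = 0 gives independent taps; rho = 1 makes every tap in a filter (nearly) identical,
# so the convolution reduces to a per-channel mean pool with a random scale.
w_indep = sample_correlated_filters(c_out=8, c_in=3, k=3, rho=0.0)
w_pool = sample_correlated_filters(c_out=8, c_in=3, k=3, rho=1.0)
print(w_indep.shape)  # (8, 3, 3, 3)
```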