## The Spectral Bias of Polynomial Neural Networks

Published: 29 Sept 2021 (modified: 11 Feb 2022) · ICLR 2022 Poster
Keywords: Deep Neural Networks, Polynomials, Spectral Bias, Neural Tangent Kernel, Deep Image Prior, Infinite Width, Mercer Decomposition
Abstract: Polynomial neural networks (PNNs) have recently been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical. Previous studies have revealed that neural networks exhibit a *spectral bias* towards low-frequency functions, so low-frequency components are learned faster during training. Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs. We find that the $\Pi$-Net family, a recently proposed parametrization of PNNs, speeds up the learning of higher frequencies. We verify the theoretical bias through extensive experiments. We expect our analysis to provide novel insights into designing architectures and learning frameworks that incorporate multiplicative interactions via polynomials.
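As a rough intuition for why multiplicative interactions help with high frequencies, the following sketch (not from the paper; a toy illustration in NumPy) shows that a degree-2 term, such as the Hadamard products used in $\Pi$-Net style parametrizations, maps a sinusoid of frequency $f$ to one of frequency $2f$, since $\sin^2(2\pi f t) = \tfrac{1}{2} - \tfrac{1}{2}\cos(2\pi \cdot 2f \cdot t)$:

```python
import numpy as np

n, f = 1024, 5                       # samples and base frequency (cycles per window)
t = np.arange(n) / n
x = np.sin(2 * np.pi * f * t)        # input feature: pure sinusoid at frequency f
z = x * x                            # element-wise (Hadamard) product: degree-2 term

def dominant_freq(sig):
    """Index of the strongest nonzero-frequency bin in the real FFT."""
    spec = np.abs(np.fft.rfft(sig - sig.mean()))  # remove DC before peaking
    return int(np.argmax(spec))

print(dominant_freq(x))  # 5  -> energy at the base frequency
print(dominant_freq(z))  # 10 -> the quadratic term creates content at 2f
```

Stacking such multiplicative layers compounds this effect, which is consistent with the paper's finding that polynomial networks speed up the learning of higher frequencies.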
One-sentence Summary: We study the spectral bias of polynomial networks and compare it with the spectral bias of standard neural nets using kernel approximations