Expressivity of Shallow and Deep Neural Networks for Polynomial Approximation

Published: 01 Jan 2023, Last Modified: 06 May 2023. CoRR 2023.
Abstract: This study explores the number of neurons required for a Rectified Linear Unit (ReLU) neural network to approximate multivariate monomials. We establish an exponential lower bound on the complexity of any shallow network that approximates the product function over a general compact domain, and we show that this lower bound does not apply to normalized Lipschitz monomials over the unit cube. These findings suggest that shallow ReLU networks suffer from the curse of dimensionality when expressing functions whose Lipschitz constant scales with the input dimension, and that the expressive power of neural networks depends more on their depth than on their overall complexity.
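To make the depth-versus-width contrast concrete, the sketch below shows the classical depth-efficient ReLU approximation of the product monomial x_1 ... x_d on the unit cube [0, 1]^d: the square function is approximated by composing m tent maps (Yarotsky-style construction, error at most 4^(-(m+1)) on [0, 1]), a pairwise product follows from the polarization identity xy = ((x + y)^2 - x^2 - y^2) / 2, and the d-variate product is assembled with a binary tree of approximate multiplications. This is an illustrative sketch of the depth-efficiency phenomenon the abstract alludes to, not the construction or lower-bound argument from the paper itself; the function names and the accuracy parameter m are our own choices.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


def tent(x):
    # Hat function: 2x on [0, 1/2], 2(1 - x) on [1/2, 1],
    # written with two ReLU units (valid for inputs in [0, 1]).
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)


def relu_square(x, m):
    # Piecewise-linear interpolant of x**2 on [0, 1]:
    #   x**2 ~ x - sum_{s=1..m} g_s(x) / 4**s,
    # where g_s is the s-fold composition of the tent map.
    # Each composition is one more hidden ReLU layer, so this is a
    # depth-O(m) network with error at most 4**(-(m + 1)).
    x = np.asarray(x, dtype=float)
    out = x.copy()
    g = x
    for s in range(1, m + 1):
        g = tent(g)
        out = out - g / (4.0 ** s)
    return out


def relu_product(x, y, m):
    # Pairwise product via xy = ((x + y)**2 - x**2 - y**2) / 2,
    # with each square ReLU-approximated; valid for x, y in [0, 1].
    s = (np.asarray(x, dtype=float) + np.asarray(y, dtype=float)) / 2.0
    return 2.0 * relu_square(s, m) - 0.5 * relu_square(x, m) - 0.5 * relu_square(y, m)


def relu_monomial(x, m):
    # Product of all coordinates via a binary tree of approximate
    # pairwise products: depth O(m log d) with O(m d) neurons overall.
    vals = list(x)
    while len(vals) > 1:
        nxt = [relu_product(vals[i], vals[i + 1], m)
               for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2 == 1:
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, m = 8, 10
    x = rng.uniform(0.0, 1.0, size=d)
    approx = float(relu_monomial(x, m))
    exact = float(np.prod(x))
    print(f"exact product:    {exact:.8f}")
    print(f"deep ReLU approx: {approx:.8f}   |error| = {abs(approx - exact):.2e}")
```

The sketch only illustrates the deep, width-efficient side of the comparison: by contrast, the paper's lower bound states that any shallow network approximating the same product function over a general compact domain requires exponentially many neurons in the input dimension.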