Shallow and Deep Networks are Near-Optimal Approximators of Korobov Functions

29 Sept 2021 (edited 14 Mar 2022), ICLR 2022 Poster
  • Abstract: In this paper, we analyze the number of neurons and training parameters that a neural network needs to approximate multivariate functions of bounded second mixed derivatives --- Korobov functions. We prove upper bounds on these quantities for shallow and deep neural networks, drastically lessening the curse of dimensionality. Our bounds hold for general activation functions, including ReLU. We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions, showing that neural networks are near-optimal function approximators.
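For context, the Korobov spaces referenced in the abstract are, in the sparse-grid literature's standard notation (this definition is supplied here for the reader and is not quoted from the page itself), spaces of functions on the unit cube with bounded second mixed derivatives and zero boundary values:

```latex
X^{2,p}(\Omega) = \left\{ f \in L^{p}(\Omega) \;:\; D^{\alpha} f \in L^{p}(\Omega)
  \ \text{for all } |\alpha|_{\infty} \le 2,\ f|_{\partial\Omega} = 0 \right\},
\qquad \Omega = [0,1]^{d},
```

where $\alpha \in \mathbb{N}^{d}$ is a multi-index and $|\alpha|_{\infty} = \max_i \alpha_i$, so every mixed partial derivative of order at most 2 in each coordinate is required to lie in $L^{p}$.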
  • One-sentence Summary: We analyze the number of neurons and training parameters that a neural network needs to approximate multivariate functions of bounded second mixed derivatives.
