Expressiveness of Neural Networks Having Width Equal or Below the Input Dimension

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 (ICLR 2022 Submission)
Keywords: Neural network approximation, expressiveness of width bounded neural networks, maximum principle
Abstract: The understanding of the minimum width that deep neural networks need to ensure universal approximation for different activation functions has progressively been extended \citep{park2020minimum}. In particular, for approximation on general compact sets in the input space, a network width less than or equal to the input dimension rules out universal approximation. In this work, we focus on network functions whose width is at or below this critical bound. We prove a maximum principle from which we conclude that, for all continuous and monotonic activation functions, universal approximation of arbitrary continuous functions is impossible on sets that consist of the boundary of an open set plus an inner point. Conversely, we prove that in this regime the exact fit of partially constant functions on disjoint compact sets is still possible for ReLU network functions, under some conditions on the mutual location of these components. We also show that with the cosine activation function, a three-layer network of width one is sufficient to approximate any function on an arbitrary finite set.
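As a purely illustrative aside (not part of the paper): the sketch below sets up a "three layer, width one" network with cosine activations, i.e. every hidden layer has a single unit, and numerically fits it to an arbitrary finite set of points. Only the architecture shape follows the abstract; the least-squares fitting routine, the random data, and all parameter names are our own assumptions, and the fit is an illustration rather than the paper's constructive argument.

```python
# Illustrative sketch only (assumptions beyond the abstract are marked as such):
# a width-one network with cosine activations, evaluated on input dimension d,
# fitted to an arbitrary finite set by generic least squares (our choice, not
# the paper's construction).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
d, n = 3, 5                      # input dimension and size of the finite set
X = rng.normal(size=(n, d))     # arbitrary finite set of inputs in R^d
y = rng.normal(size=n)          # arbitrary target values

def width_one_cos_net(params, x):
    # layer 1: R^d -> R, cosine activation
    w1, b1 = params[:d], params[d]
    h1 = np.cos(x @ w1 + b1)
    # layer 2: R -> R, cosine activation
    w2, b2 = params[d + 1], params[d + 2]
    h2 = np.cos(w2 * h1 + b2)
    # layer 3 (output): affine map R -> R
    w3, b3 = params[d + 3], params[d + 4]
    return w3 * h2 + b3

def residuals(params):
    return np.array([width_one_cos_net(params, x) for x in X]) - y

theta0 = rng.normal(size=d + 5)
sol = least_squares(residuals, theta0)
print("max |f(x_i) - y_i| after fitting:", np.max(np.abs(sol.fun)))
```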
One-sentence Summary: We prove theoretical results for neural network functions whose width is equal to or below the input dimension.
Supplementary Material: zip
