Exploring the Approximation Capabilities of Multiplicative Neural Networks for Smooth Functions

Published: 06 Jun 2023, Last Modified: 06 Jun 2023. Accepted by TMLR.
Abstract: Multiplication layers are a key component in various influential neural network modules, including self-attention and hypernetwork layers. In this paper, we investigate the approximation capabilities of deep neural networks with intermediate neurons connected by simple multiplication operations. We consider two classes of target functions: generalized bandlimited functions, which are frequently used to model real-world signals with finite bandwidth, and Sobolev-type balls, which are embedded in the Sobolev space $\mathcal{W}^{r,2}$. Our results demonstrate that multiplicative neural networks can approximate these functions with significantly fewer layers and neurons than standard ReLU neural networks, in terms of both the input dimension and the approximation error. These findings suggest that multiplicative gates can outperform standard feed-forward layers and have potential for improving neural network design.
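To illustrate the kind of architecture the abstract refers to, a minimal sketch of a layer whose neurons are combined by elementwise multiplication is shown below. The class name, layer widths, and the choice of multiplying two linear projections are illustrative assumptions for exposition, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class MultiplicativeLayer(nn.Module):
    """Illustrative layer: each output neuron is the product of two affine functions of the input."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.left = nn.Linear(in_features, out_features)
        self.right = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Elementwise multiplication of two linear projections (a multiplicative gate).
        return self.left(x) * self.right(x)

# Hypothetical usage: a small multiplicative network on 3-dimensional inputs.
net = nn.Sequential(
    MultiplicativeLayer(3, 64),
    MultiplicativeLayer(64, 64),
    nn.Linear(64, 1),
)
y = net(torch.randn(8, 3))  # output shape: (8, 1)
```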
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: In the camera-ready revision, we made the following changes to the manuscript: 1. We fixed various typos and notational issues. 2. We included further citations in the bibliographic review. 3. We added a discussion of the implications of our work for real-world applications, specifically signal processing and deep-learning-based PDE solvers. 4. We conducted an additional round of proofreading of the main text and the appendices. 5. We revised the bibliography and removed duplicate references.
Assigned Action Editor: ~Murat_A_Erdogdu1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 766