On the Optimality of Activations in Implicit Neural Representations

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Implicit Neural Representations, Sampling theory
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Implicit neural representations (INRs) have recently surged in popularity as a class of neural networks capable of encoding signals as compact, differentiable entities. To capture high-frequency content, INRs often employ techniques such as Fourier positional encodings or non-traditional activation functions like Gaussian, sinusoid, or wavelets. Despite the impressive results achieved with these activations, their properties have seen limited exploration within a unified theoretical framework. To address this gap, we conduct a comprehensive analysis of these activations from the perspective of sampling theory. Our investigation reveals that, particularly for shallow INRs, the sinc activation—previously unused in conjunction with INRs—is theoretically optimal for signal encoding. We further establish a connection between dynamical systems and INRs, leveraging sampling theory to bridge the two paradigms. Notably, we show how the implicit architectural regularization inherent to INRs allows them to model such systems with minimal need for explicit regularization.
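For intuition, a minimal sketch of a shallow, sinc-activated INR fitting a 1D signal might look like the following. The layer widths, the frequency scale `omega`, and the toy training loop are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SincINR(nn.Module):
    """Sketch of a shallow INR with a sinc activation (illustrative only)."""
    def __init__(self, in_dim=1, hidden_dim=256, out_dim=1, omega=30.0):
        super().__init__()
        self.omega = omega  # assumed frequency scale, analogous to SIREN's omega_0
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        # sinc activation: sin(omega * z) / (omega * z), with value 1 at z = 0
        z = self.omega * self.fc1(x)
        h = torch.where(z == 0, torch.ones_like(z), torch.sin(z) / z)
        return self.fc2(h)

# Toy usage: fit coordinates in [-1, 1] to a high-frequency 1D signal.
coords = torch.linspace(-1, 1, 512).unsqueeze(-1)
target = torch.sin(8 * torch.pi * coords)
model = SincINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(1000):
    opt.zero_grad()
    loss = ((model(coords) - target) ** 2).mean()
    loss.backward()
    opt.step()
```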
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3353