AINR: Adaptive Learning of Activations for Implicit Neural Representations

24 Sept 2024 (modified: 15 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Implicit Neural Representations, Adaptive Activation Function Learning, MLPs
Abstract: Implicit Neural Representations (INRs) provide a continuous function learning framework for discrete signal representations. Using positional embeddings and/or specialized activation functions, INRs have overcome many limitations of traditional discrete representations. However, existing work primarily relies on a single activation function throughout the network, which often requires an exhaustive search for optimal activation parameters tailored to each signal and INR application. We hypothesize that this approach may restrict the representation power and generalization capabilities of INRs, limiting their broader applicability. In this paper, we introduce AINR, a method that adaptively learns the most suitable activation functions for INRs from a predefined dictionary. This dictionary includes activation functions such as Raised Cosine (RC), Root Raised Cosine (RRC), Prolate Spheroidal Wave Function (PSWF), Sinc, Gabor Wavelet, Gaussian, and Sinusoid. Our method identifies the activation atom best matched to each layer of the INR for the given signal. Experimental results demonstrate that AINR not only significantly improves INR performance across tasks such as image representation, image inpainting, 3D shape representation, novel view synthesis, super-resolution, and reliable edge detection, but also eliminates the exhaustive search for activation parameters that previously had to be conducted before INR training could even begin.
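The idea of selecting an activation atom per layer from a dictionary can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: it assumes a soft (softmax-weighted) mixture over a small dictionary of candidate activations whose mixing logits would be trained jointly with the layer weights; the atom names, frequency constants, and `AdaptiveLayer` class are all hypothetical.

```python
import numpy as np

# Hypothetical dictionary of candidate activations (a subset of the atoms
# named in the abstract); the scale constants are illustrative assumptions.
ACTIVATIONS = {
    "sine":     lambda x: np.sin(30.0 * x),          # SIREN-style sinusoid
    "gaussian": lambda x: np.exp(-(10.0 * x) ** 2),  # Gaussian bump
    "sinc":     lambda x: np.sinc(5.0 * x),          # normalized sinc
}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class AdaptiveLayer:
    """One INR layer with learnable mixing logits over the activation dictionary."""

    def __init__(self, in_dim, out_dim, rng):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(in_dim), (in_dim, out_dim))
        self.b = np.zeros(out_dim)
        # One logit per dictionary atom; training would sharpen the softmax
        # toward the atom best matched to this layer for the given signal.
        self.logits = np.zeros(len(ACTIVATIONS))

    def forward(self, x):
        pre = x @ self.W + self.b
        alphas = softmax(self.logits)
        # Soft mixture of the candidate activations applied to the pre-activation.
        return sum(a * f(pre) for a, f in zip(alphas, ACTIVATIONS.values()))

rng = np.random.default_rng(0)
layer = AdaptiveLayer(2, 16, rng)
coords = rng.uniform(-1.0, 1.0, (4, 2))  # e.g. 2D pixel coordinates in [-1, 1]
out = layer.forward(coords)
print(out.shape)  # (4, 16)
```

In practice the logits would receive gradients from the same reconstruction loss as the weights, so the network discovers the per-layer activation during training rather than requiring an exhaustive pre-training search.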
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3978