HOSC: Hyperbolic Oscillating Periodic Activations for Sharp Feature Preservation in Implicit Neural Representations

20 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Activation Functions, Neural Implicit Representations, High-Frequency Preservation, Periodic Activation Functions, Signal Encoding, Neural Networks
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Hyperbolic Oscillating Activation (HOSC) is a new activation function that surpasses ReLU and SIREN in preserving high frequencies and speeds up convergence, offering an alternative for encoding neural implicit representations of curves, images, and SDFs.
Abstract: In learning implicit neural representations of field functions, the choice of activation critically influences a model's capacity to encode intricate signal and pattern properties. Traditional activation functions such as ReLU, and more recent ones such as the sinusoidal activation of SIREN, provide the bases for signal approximation; the choice of activation becomes especially consequential when preserving sharp features in signals such as SDFs or RGB images. In this work, we introduce a novel activation function that we denote the Hyperbolic Oscillating Activation (HOSC), defined as $\text{hosc}(x) = \tanh(a \sin(x))$. Our empirical evidence demonstrates HOSC's superior capability to preserve high-frequency sharp details compared with both SIREN and the non-periodic Rectified Linear Unit (ReLU), achieving faster convergence and lower losses on signal-encoding tasks at a reasonably small computational overhead. These advantages over ReLU and SIREN underscore HOSC's potential as a preferred choice for implicit neural field networks. The research and evaluations presented in this paper affirm the potential of HOSC as a robust, efficient, and high-performing periodic activation function for neural implicit fields of curves, images, and SDFs, opening avenues for further exploration in this domain.
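The abstract defines the activation in closed form, so a minimal sketch is straightforward. The snippet below implements $\text{hosc}(x) = \tanh(a \sin(x))$ exactly as written; the value of the sharpness parameter `a` used here is illustrative, not taken from the paper.

```python
import math

def hosc(x: float, a: float = 1.0) -> float:
    """Hyperbolic Oscillating Activation: tanh(a * sin(x)).

    The activation is periodic in x (period 2*pi). Increasing `a`
    sharpens the smooth sine wave toward a square wave, which is
    the mechanism the abstract credits for preserving sharp,
    high-frequency features.
    """
    return math.tanh(a * math.sin(x))

# At a = 1 the output is a smooth wave bounded by +/- tanh(1);
# as a grows, hosc(x) approaches sign(sin(x)).
print(hosc(math.pi / 2, a=1.0))   # tanh(1) ~= 0.7616
print(hosc(math.pi / 2, a=20.0))  # close to 1.0
```

In a network, `a` would typically be a per-layer hyperparameter or a learnable scalar; the paper's own parameterization may differ.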
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2662