Keywords: Implicit Function Theorem, Semantics in Weights, Weight Space Learning, Implicit Neural Representations, Hypernetworks
TL;DR: The Implicit Function Theorem provides a theoretical guarantee for using "weights as a modality" in downstream tasks such as classification.
Abstract: Implicit Neural Representations (INRs) have demonstrated remarkable capability in representing 2D and 3D data, whose semantics are convincingly captured in the weights of the corresponding neural network.
Despite the success of INRs across a variety of applications, a precise theoretical explanation of how the semantics of data are encoded into network weights is still missing.
In this work, we propose a jointly trainable \emph{HyperINR} model, which learns a hypernetwork to map a learnable low-dimensional latent space to the weight space of an INR.
By employing the Implicit Function Theorem, we show that \emph{HyperINR} is a mathematically rigorous framework that guarantees the mapping of the semantics of data to the latent weight representations.
Extensive classification experiments on 2D and 3D data confirm the effectiveness of our approach and demonstrate superior performance compared to state-of-the-art methods.
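To make the architecture concrete, the following is a minimal sketch of the hypernetwork-to-INR mapping described above: a latent code is mapped to the flat weight vector of a small coordinate network, which is then evaluated at input coordinates. All dimensions, the single-linear-layer hypernetwork, and the sinusoidal activation are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8           # assumed latent code size
HIDDEN = 16              # assumed INR hidden width
IN_DIM, OUT_DIM = 2, 1   # e.g. 2D pixel coordinates -> grayscale intensity

# Total number of INR parameters: W1, b1, W2, b2 flattened together.
N_PARAMS = HIDDEN * IN_DIM + HIDDEN + OUT_DIM * HIDDEN + OUT_DIM

# Hypernetwork: a single linear map from latent space to weight space
# (a real implementation would use a deeper, jointly trained network).
H = rng.normal(scale=0.1, size=(N_PARAMS, LATENT_DIM))

def inr_forward(theta, x):
    """Evaluate the INR whose flat weight vector is theta at coordinates x."""
    i = 0
    W1 = theta[i:i + HIDDEN * IN_DIM].reshape(HIDDEN, IN_DIM)
    i += HIDDEN * IN_DIM
    b1 = theta[i:i + HIDDEN]
    i += HIDDEN
    W2 = theta[i:i + OUT_DIM * HIDDEN].reshape(OUT_DIM, HIDDEN)
    i += OUT_DIM * HIDDEN
    b2 = theta[i:i + OUT_DIM]
    h = np.sin(x @ W1.T + b1)  # sinusoidal activation, common in INRs
    return h @ W2.T + b2

z = rng.normal(size=LATENT_DIM)   # latent code (learned jointly in practice)
theta = H @ z                     # hypernetwork output = INR weights
coords = rng.uniform(-1.0, 1.0, size=(5, IN_DIM))
values = inr_forward(theta, coords)
print(values.shape)               # (5, 1)
```

In this view, downstream tasks such as classification operate on the low-dimensional latent code `z` rather than on the full weight vector, which is what the theoretical guarantee concerns.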
Submission Number: 26