Implicit Neural Representation Generation with Hypernetworks

Published: 18 Sept 2025 (modified: 11 Feb 2026)
Submitted to ICLR 2026
License: CC BY 4.0
Keywords: Hypernetworks, Implicit Neural Representations
TL;DR: We present a novel hypernetwork approach to generate Implicit Neural Representations
Abstract: Implicit Neural Representations (INRs) are versatile tools widely used for representing images, geometry, and radiance fields. By parameterizing target signals as neural networks, INRs offer advantages such as resolution flexibility. Recently, researchers have applied hypernetworks to generate INRs. However, current approaches either fail to capture dependencies between layers in the target network or lack scalability, limiting their effectiveness. In this paper, we introduce a novel hypernetwork framework that both models these dependencies and scales easily. Our approach treats the generation of target network parameters as an optimization process, using the chain rule to capture layer dependencies. We develop a simple yet effective tokenization mechanism that allows us to leverage Transformers as our architecture, ensuring scalability. Additionally, we introduce a practical weight initialization method that stabilizes the training process. Extensive experiments across various datasets consistently show that our approach outperforms existing methods.
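For readers unfamiliar with INRs, the following is a minimal sketch of the idea the abstract builds on: a coordinate MLP that maps spatial coordinates to signal values and can therefore be sampled at any resolution. The sine activations assume a SIREN-style parameterization; all names and sizes here are illustrative, not the paper's method.

```python
import numpy as np

def make_inr(widths, seed=0):
    """Randomly initialize a small coordinate MLP (illustrative INR)."""
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        w = rng.uniform(-1, 1, (n_in, n_out)) / np.sqrt(n_in)
        b = np.zeros(n_out)
        params.append((w, b))
    return params

def inr_forward(params, coords):
    """Evaluate the INR at arbitrary coordinates (resolution-free)."""
    h = coords
    for w, b in params[:-1]:
        h = np.sin(30.0 * (h @ w + b))  # SIREN-style sine activation (assumed)
    w, b = params[-1]
    return h @ w + b  # linear output layer -> e.g. RGB

# The same network can be queried on coordinate grids of any resolution.
params = make_inr([2, 64, 64, 3])  # 2-D input coords -> 3 output channels
for res in (16, 64):
    xs = np.linspace(-1.0, 1.0, res)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    rgb = inr_forward(params, grid)
    print(res, rgb.shape)
```

A hypernetwork in this setting would output the `(w, b)` pairs above instead of sampling them randomly; the abstract's contribution concerns how those per-layer parameters are generated jointly rather than independently.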
Supplementary Material: zip
Primary Area: generative models
Submission Number: 10791