Beyond Heuristics: Globally Optimal Configuration of Implicit Neural Representations

16 Sept 2025 (modified: 12 Feb 2026) · ICLR 2026 Conference Desk Rejected Submission · CC BY 4.0
Keywords: Implicit Neural Representations, Bayesian Optimization, Spectral Bias
TL;DR: OptiINR casts INR design as a Bayesian optimization problem over layer-wise activation functions and initialization scales. It outperforms manually tuned baselines on image, audio, and 3D tasks, with reconstruction spectra close to ground truth, replacing ad-hoc tuning.
Abstract: Implicit Neural Representations (INRs) have emerged as a transformative paradigm in signal processing and computer vision, excelling in tasks from image reconstruction to 3D shape modeling. Yet their effectiveness is fundamentally limited by the absence of principled strategies for optimal configuration—spanning activation selection, initialization scales, layer-wise adaptation, and their intricate interdependencies. These choices dictate performance, stability, and generalization, but current practice relies on ad-hoc heuristics, brute-force grid searches, or task-specific tuning, often leading to inconsistent results across modalities. We introduce OptiINR, the first unified framework that formulates INR configuration as a rigorous optimization problem. Leveraging Bayesian optimization, OptiINR efficiently explores the joint space of discrete activation families—sinusoidal (SIREN), wavelet-based (WIRE), variable-periodic (FINER)—and continuous initialization parameters. This systematic approach replaces fragmented manual tuning with a coherent, data-driven optimization process. By delivering globally optimal configurations, OptiINR establishes a principled foundation for INR design, consistently maximizing performance across diverse signal processing applications.
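The abstract describes a joint search space mixing discrete activation families (SIREN, WIRE, FINER) with continuous initialization parameters. A minimal sketch of such a mixed configuration space is shown below; all names (`ACTIVATIONS`, `init_scale`, `toy_objective`) are illustrative assumptions, not from the paper, and stdlib random search stands in for the Bayesian optimizer, whose role would be to propose candidate configurations more sample-efficiently.

```python
import random

# Hypothetical sketch of a mixed INR configuration space: a discrete
# activation family and a continuous initialization scale per layer.
# The paper uses Bayesian optimization; random search is a stdlib-only
# placeholder for the candidate-proposal step.

ACTIVATIONS = ["siren", "wire", "finer"]  # sinusoidal / wavelet / variable-periodic

def sample_config(n_layers, rng):
    """Draw one candidate: an activation choice and init scale per layer."""
    return [
        {"activation": rng.choice(ACTIVATIONS),
         "init_scale": rng.uniform(1.0, 100.0)}  # e.g. a SIREN-style omega_0 range
        for _ in range(n_layers)
    ]

def toy_objective(config):
    """Stand-in for reconstruction quality; a real run would train an INR
    with this configuration and report PSNR on the target signal."""
    return -sum(abs(layer["init_scale"] - 30.0) for layer in config)

def search(n_layers=3, budget=200, seed=0):
    """Evaluate `budget` random configurations and keep the best one."""
    rng = random.Random(seed)
    return max((sample_config(n_layers, rng) for _ in range(budget)),
               key=toy_objective)
```

In a real system the objective is expensive (each evaluation trains an INR), which is exactly the regime where a Bayesian surrogate model pays off over random search.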
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 6492