PCAInit: Training-Free Initialization for Image-Based Neural Representations

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Image Reconstruction, Implicit Neural Representations (INRs), Representation Learning, Representational Alignment, Weight Initialization, Weight Space
TL;DR: We introduce PCAInit, a novel training-free initialization method derived from an analysis of the relation between image space and weight space through principal component analysis.
Abstract: Implicit neural representations (INRs) are widely used to model data as continuous functions parameterized by multi-layer perceptrons (MLPs). However, the relationship between the weight space of INRs and the underlying data space remains underexplored. In this paper, using SIREN as a baseline architecture, we study this connection through the lens of video frame reconstruction, a controlled setting in which principal component analysis (PCA) reveals a striking alignment between image space and weight space. Building on this observation, we introduce \textit{PCAInit}, a novel training-free initialization strategy. We compare PCAInit with pretraining-based approaches, which can also yield high reconstruction quality but at the cost of additional training time: a meta-learned initialization and two further methods we propose. We show that PCAInit achieves the best overall reconstruction quality without any extra training time. For example, on a representative DAVIS 2017 video (bear, 480p), PCAInit improves PSNR by up to +37.1\% over standard SIREN and +26.7\% over meta-learned initialization. Furthermore, we show that PCAInit generalizes beyond video frames, achieving the best PSNR on collections of images as well. Moreover, PCAInit attains high PSNR on additional evaluation tasks and exhibits strong universality in cross-video initialization experiments. Our results reveal a promising research direction on the interplay between image space and weight space in INRs, opening new avenues for efficient INRs with improved reconstruction quality and broader applicability.
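To make the abstract's core analysis tool concrete, the sketch below computes the top principal components of a set of flattened frames via SVD. This is a minimal, hypothetical illustration of the image-space side of the PCA analysis the abstract describes (here on random stand-in data), not the paper's actual PCAInit algorithm or its weight-space counterpart.

```python
import numpy as np

# Toy stand-in for video frames: 16 "frames" of 8x8 pixels.
rng = np.random.default_rng(0)
frames = rng.standard_normal((16, 8, 8))

# Flatten each frame into a vector and center across frames.
X = frames.reshape(len(frames), -1)        # shape: (n_frames, n_pixels)
Xc = X - X.mean(axis=0, keepdims=True)

# PCA via SVD: rows of Vt are the principal directions in pixel space.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4
components = Vt[:k]                        # top-k image-space components
explained = (S[:k] ** 2) / (S ** 2).sum()  # variance fraction per component

print(components.shape)                    # (4, 64)
```

The analysis the abstract alludes to would compare such image-space components against principal components of trained SIREN weight vectors; how those components are mapped into an initialization is specified in the paper itself.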
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 18749