Understanding Bias Terms in Neural Representations

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Implicit Neural Representation, INR Classification, Neural Field, Coordinate-based Network
TL;DR: In this paper, we examine the impact and significance of bias terms in Implicit Neural Representations (INRs).
Abstract: In this paper, we examine the impact and significance of bias terms in Implicit Neural Representations (INRs). While bias terms are known to enhance nonlinear capacity by shifting activations in typical neural networks, we find that their functionality differs markedly in neural representation networks. Our analysis reveals that INR performance neither scales with an increased number of bias terms nor improves substantially through bias-term gradient propagation. We demonstrate that bias terms in INRs primarily serve to eliminate spatial aliasing caused by the symmetry of both coordinates and activation functions, with input-layer bias terms yielding the most significant benefits. These findings challenge the conventional practice of implementing fully biased INR architectures. We propose using frozen bias terms exclusively in input layers, which consistently outperforms fully biased networks in signal-fitting tasks. Furthermore, we introduce Feature-Biased INRs (Feat-Bias), which initialize the input-layer bias with high-level features extracted from pre-trained models. This feature-biasing approach effectively addresses the limited performance of INR post-processing tasks caused by the uninterpretability of neural parameters, achieving superior accuracy while reducing parameter count and improving reconstruction quality.
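The abstract describes two mechanisms: restricting bias terms to a frozen input layer, and Feat-Bias initialization of that bias from pre-trained features. Below is a minimal PyTorch sketch of both ideas, not the authors' implementation; the sine activation (as in SIREN), the layer widths, the frequency scale omega0, and the helper feat_bias_init are all illustrative assumptions.

```python
# Minimal sketch: an INR whose only bias term lives in the input layer,
# kept frozen during fitting. All architectural choices here are assumptions.
import torch
import torch.nn as nn

class InputBiasINR(nn.Module):
    def __init__(self, in_dim=2, hidden=256, depth=4, out_dim=3, omega0=30.0):
        super().__init__()
        self.omega0 = omega0
        # Input layer: the only layer carrying a bias term.
        self.first = nn.Linear(in_dim, hidden, bias=True)
        # Hidden and output layers: bias-free.
        self.hidden = nn.ModuleList(
            nn.Linear(hidden, hidden, bias=False) for _ in range(depth - 1)
        )
        self.last = nn.Linear(hidden, out_dim, bias=False)
        # Freeze the input-layer bias so it receives no gradient updates.
        self.first.bias.requires_grad_(False)

    def forward(self, coords):
        x = torch.sin(self.omega0 * self.first(coords))
        for layer in self.hidden:
            x = torch.sin(self.omega0 * layer(x))
        return self.last(x)

def feat_bias_init(model, feature):
    """Hypothetical Feat-Bias initialization: copy a high-level feature
    vector from a pre-trained encoder into the frozen input-layer bias."""
    with torch.no_grad():
        model.first.bias.copy_(feature[: model.first.bias.numel()])
```

Under this reading, only the weight matrices are optimized during signal fitting, while the input-layer bias stays fixed at its (possibly feature-derived) initialization, which is what breaks the coordinate/activation symmetry the abstract attributes to spatial aliasing.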
Supplementary Material: zip
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 6611