Inductive Gradient Adjustment for Spectral Bias in Implicit Neural Representations

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We identify the empirical Neural Tangent Kernel (eNTK) matrix as a reliable link between spectral bias and training dynamics, and propose Inductive Gradient Adjustment (IGA) to purposefully improve the spectral bias of implicit neural representations.
Abstract: Implicit Neural Representations (INRs), as a versatile representation paradigm, have achieved success in various computer vision tasks. Due to the spectral bias of vanilla multi-layer perceptrons (MLPs), existing methods focus on designing MLPs with sophisticated architectures or on repurposing training techniques to obtain highly accurate INRs. In this paper, we delve into the linear dynamics model of MLPs and theoretically identify the empirical Neural Tangent Kernel (eNTK) matrix as a reliable link between spectral bias and training dynamics. Based on this insight, we propose a practical **I**nductive **G**radient **A**djustment (**IGA**) method, which purposefully improves the spectral bias via inductive generalization of an eNTK-based gradient transformation matrix. Theoretical and empirical analyses validate the impact of IGA on spectral bias. We further evaluate our method on different INR tasks with various INR architectures and compare it to existing training techniques. The consistent and superior improvements clearly validate the advantage of our IGA. Armed with our gradient adjustment method, INRs with richer texture details and sharper edges can be learned from data through tailored adjustments to spectral bias. The codes are available at: [https://github.com/LabShuHangGU/IGA-INR](https://github.com/LabShuHangGU/IGA-INR).
Lay Summary: Neural networks that learn to represent complex data such as images or 3D shapes are referred to as **Implicit Neural Representations (INRs)**. INRs can capture global structure and offer continuous representations with arbitrary precision, but they often struggle to learn sharp edges or textures due to an implicit training bias of neural networks called "spectral bias" -- a tendency to learn smooth patterns first. In our work, we characterize the training dynamics from the linear dynamics perspective and identify the empirical Neural Tangent Kernel (eNTK) matrix as a key link between spectral bias and training dynamics. Using this insight, we propose a new method -- **Inductive Gradient Adjustment (IGA)** -- that mitigates spectral bias via inductive generalization of a gradient transformation matrix derived from the eNTK matrix. Our IGA method is model-agnostic: it works across different architectures and tasks and leads to clearer representations without changing the model structure. We hope the superior performance of our approach will inspire growing interest in training dynamics and implicit bias in neural networks -- advancing model accuracy and even generalization through improved training strategies.
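To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of eNTK-based gradient adjustment on a small probe subset of coordinates. It is not the authors' exact IGA construction: the probe size `n_probe`, the eigenvalue rescaling rule controlled by `alpha`, and the clamping constants are illustrative stand-ins; see the official repository above for the real implementation.

```python
# Illustrative sketch (not the paper's exact IGA): rescale eNTK eigendirections
# on a probe subset so that slowly converging components get a larger gradient.
# Assumes a scalar-output coordinate MLP `model`, coordinates `x` of shape
# [N, d], and targets `y` of shape [N]. Requires PyTorch >= 2.0 (torch.func).
import torch
from torch.func import functional_call, jacrev, vmap

def entk_matrix(model, x_probe):
    """Empirical NTK K = J J^T on a probe set of coordinates."""
    params = {k: v.detach() for k, v in model.named_parameters()}

    def f(p, xi):  # scalar network output for a single coordinate
        return functional_call(model, p, (xi.unsqueeze(0),)).squeeze()

    jac = vmap(jacrev(f), (None, 0))(params, x_probe)  # per-sample Jacobians
    J = torch.cat([j.reshape(x_probe.shape[0], -1) for j in jac.values()], dim=1)
    return J @ J.T  # [n_probe, n_probe]

def iga_style_loss(model, x, y, n_probe=128, alpha=0.5, max_boost=10.0):
    """Backprop of this loss yields the adjusted gradient (2/n) J^T T r."""
    idx = torch.randperm(x.shape[0], device=x.device)[:n_probe]
    K = entk_matrix(model, x[idx])
    lam, V = torch.linalg.eigh(K)
    # In linearized dynamics the error along eigendirection i decays at a rate
    # proportional to lam_i, so slow (typically high-frequency) directions
    # receive a relative boost; max_boost caps the amplification.
    scale = (lam.clamp(min=1e-8) / lam.max()).pow(-alpha).clamp(max=max_boost)
    T = V @ torch.diag(scale) @ V.T  # symmetric gradient transformation matrix
    r = model(x[idx]).squeeze(-1) - y[idx]  # residual on the probe subset
    return (T @ r.unsqueeze(-1)).squeeze(-1).mul(r).mean()  # (1/n) r^T T r
```

A training step would then look like `loss = iga_style_loss(model, x, y); loss.backward(); optimizer.step()`. Estimating the transformation on a small probe subset and applying it during training keeps the eigendecomposition affordable, loosely mirroring the "inductive generalization" idea described in the abstract.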
Link To Code: https://github.com/LabShuHangGU/IGA-INR
Primary Area: Deep Learning->Everything Else
Keywords: Spectral Bias, Implicit Neural Representations, Training Dynamics, Inductive Gradient Adjustment
Submission Number: 724