Implicit Neural Representations (INRs) employ neural networks to represent continuous functions by mapping coordinates to the corresponding values of the target function, with applications in, e.g., inverse graphics. However, INRs suffer from spectral bias when dealing with scenes containing varying frequencies. The most common remedy is Fourier features-based methods such as positional encoding; however, these methods introduce noise into the output, which degrades performance on downstream tasks. This paper addresses the problem by first investigating its underlying causes through the lens of the Neural Tangent Kernel. Our theoretical analysis shows that fitting with a Fourier features embedding can be interpreted as fitting a Fourier series expansion of the target function, and that the insufficiency of the finitely sampled frequencies is what causes the noisy outputs. Leveraging these insights, we introduce bias-free MLPs as an adaptive linear filter that locally suppresses unnecessary frequencies while amplifying essential ones by adjusting coefficients at the coordinate level. Additionally, we propose a line-search-based algorithm that dynamically adjusts the filter's learning rate, achieving Pareto efficiency between the adaptive linear filter module and the INR. Extensive experiments demonstrate that our method consistently improves INR performance on typical tasks, including image regression, 3D shape regression, and inverse graphics.
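To make the filtering idea concrete, the following is a minimal NumPy sketch of the core mechanism the abstract describes: a bias-free MLP (no bias terms, so it stays positively homogeneous and acts as a linear filter on its input) that produces per-coordinate coefficients used to reweight Fourier features. All names, dimensions, and sampled frequencies here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fourier_features(x, freqs):
    # Map 1D coordinates to [sin(2*pi*f*x), cos(2*pi*f*x)] features
    # for a finite set of sampled frequencies (illustrative choice).
    angles = 2.0 * np.pi * np.outer(x, freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def bias_free_mlp(x, weights):
    # MLP with no bias terms: scaling the input by a positive scalar
    # scales the output by the same factor, so the network behaves
    # like an (input-dependent) linear filter.
    h = x
    for W in weights[:-1]:
        h = np.maximum(h @ W, 0.0)  # ReLU, preserves homogeneity
    return h @ weights[-1]

rng = np.random.default_rng(0)
coords = np.linspace(0.0, 1.0, 8)
freqs = np.array([1.0, 2.0, 4.0, 8.0])   # finitely sampled frequencies (assumption)
feats = fourier_features(coords, freqs)  # shape (8, 8)

# Filter network: one coefficient per Fourier feature, per coordinate.
dims = [feats.shape[1], 16, feats.shape[1]]
weights = [rng.normal(scale=0.1, size=(dims[i], dims[i + 1])) for i in range(2)]
coeffs = bias_free_mlp(feats, weights)

# Amplify or suppress each frequency component at the coordinate level.
filtered = coeffs * feats
```

The `filtered` features would then feed the downstream INR; the coefficients vary with the coordinate, which is what makes the filter adaptive rather than a fixed global reweighting of frequencies.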