Abstract: Implicit Neural Representations (INRs) model continuous signals using compact neural networks and have become a standard tool in vision, graphics, and signal processing. A central challenge is capturing fine detail accurately without relying on heavy hand-crafted encodings or brittle training heuristics. Across the literature, periodic activations have emerged as a compelling remedy: from SIREN, which uses a single sinusoid with a fixed global frequency, to more recent architectures that use multiple sinusoids and sometimes learn their frequencies and phases. We study this family of sinusoidal activations and develop a principled theoretical and practical framework for trainable sinusoidal activations in INRs. We instantiate this framework with Sinusoidal Trainable Activation Functions (STAF), a Fourier-like activation whose amplitudes, frequencies, and phases are learned. Our analysis (i) establishes a Kronecker-equivalence construction that represents trainable sinusoidal activations using standard sine networks and quantifies the resulting gain in expressiveness, (ii) characterizes how the Neural Tangent Kernel (NTK) spectrum changes under a trainable sinusoidal parameterization, and (iii) provides an initialization method that produces standard normal post-activations without relying on asymptotic central limit theorem (CLT) arguments. Empirically, across images, audio, shapes, inverse problems (super-resolution and denoising), and NeRF, STAF is competitive and often superior in reconstruction fidelity, with consistently faster early-stage optimization. While periodic activations can reduce practical symptoms of spectral bias, our results show they do not eliminate it; instead, trainable sinusoids reshape the optimization landscape to improve the capacity–convergence trade-off.
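Since the abstract characterizes STAF as a Fourier-like activation with learned amplitudes, frequencies, and phases, the following minimal PyTorch sketch illustrates the general form phi(x) = sum_i a_i * sin(w_i * x + p_i). The class name, term count, and initialization below are illustrative assumptions, not the paper's implementation; in particular, the paper's CLT-free initialization scheme is not reproduced here.

```python
# Minimal sketch of a trainable sinusoidal ("Fourier-like") activation in the
# spirit of STAF. Names (STAFActivation, num_terms) and the naive initialization
# are hypothetical, chosen for illustration only.
import torch
import torch.nn as nn


class STAFActivation(nn.Module):
    """phi(x) = sum_i a_i * sin(w_i * x + p_i), with a, w, p learned."""

    def __init__(self, num_terms: int = 8):
        super().__init__()
        # Per-term amplitude, frequency, and phase, all trainable.
        self.amplitudes = nn.Parameter(torch.randn(num_terms) / num_terms)
        self.frequencies = nn.Parameter(torch.randn(num_terms))
        self.phases = nn.Parameter(torch.zeros(num_terms))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast x against the per-term parameters, then sum the sinusoids.
        x = x.unsqueeze(-1)  # (..., 1) -> broadcasts against (num_terms,)
        return (self.amplitudes * torch.sin(self.frequencies * x + self.phases)).sum(-1)


# Usage: an INR layer followed by the trainable sinusoidal activation.
layer = nn.Sequential(nn.Linear(2, 256), STAFActivation(num_terms=8))
coords = torch.rand(1024, 2)   # e.g., 2D pixel coordinates in [0, 1]
features = layer(coords)       # shape (1024, 256)
```

With num_terms = 1, this reduces to a single learnable sinusoid per layer (a SIREN-like activation with a trainable rather than fixed frequency); larger num_terms gives the multi-sinusoid, Fourier-like family the abstract describes.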
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~William_T_Redman1
Submission Number: 7747