A New Initialization to Control Gradients in Sinusoidal Neural Networks

ICLR 2026 Conference Submission 18587 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Initialization Strategy, Deep Neural Networks, Sinusoidal Activations, Gradient Control, Implicit Neural Representations, Neural Tangent Kernel
TL;DR: We propose a closed-form expression for parameter initialization in SIREN networks to control gradients.
Abstract: A proper initialization strategy is of primary importance for mitigating gradient explosion or vanishing when training neural networks. Yet, the impact of initialization parameters still lacks a precise theoretical understanding for several well-established architectures. Here, we propose a new initialization for networks with sinusoidal activation functions such as $\texttt{SIREN}$, focusing on controlling gradients, their scaling with network depth, and their impact on training and generalization. To achieve this, we identify a closed-form expression for the initialization of the parameters, differing from the original $\texttt{SIREN}$ scheme. This expression is derived from fixed points obtained through the convergence of the pre-activation distribution and of the variance of Jacobian sequences. Controlling gradients prevents the emergence of inappropriate frequencies during estimation, thereby improving generalization. We further show that this initialization strongly influences training dynamics through the Neural Tangent Kernel (NTK) framework. Finally, we benchmark $\texttt{SIREN}$ with the proposed initialization against the original scheme and other baselines on function fitting and image reconstruction tasks. The new initialization consistently outperforms state-of-the-art methods.
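For reference, the baseline the abstract contrasts against is the original $\texttt{SIREN}$ initialization of Sitzmann et al. (2020). The sketch below shows that standard scheme only, not the proposed closed-form expression (which is given in the paper body); the `SineLayer` class name and the default `omega_0 = 30` follow the public SIREN reference implementation and are assumptions here.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One SIREN layer: y = sin(omega_0 * (W x + b)).

    Weights follow the original SIREN scheme (Sitzmann et al., 2020),
    i.e. the baseline that the proposed closed-form initialization replaces.
    """
    def __init__(self, in_features, out_features, is_first=False, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                # First layer: W ~ U(-1/fan_in, 1/fan_in)
                bound = 1.0 / in_features
            else:
                # Hidden layers: W ~ U(-sqrt(6/fan_in)/omega_0, sqrt(6/fan_in)/omega_0)
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))
```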
Supplementary Material: zip
Primary Area: learning theory
Submission Number: 18587