Feature Learning for the High Dimensional Stationary Schr\"odinger Equation with Deep Ritz Method

03 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Feature Learning, High Dimensional PDEs, Neural Networks
Abstract: This paper investigates feature learning within the framework of the deep Ritz method for solving the stationary Schr\"odinger equation with Neumann boundary conditions. We first analyze the convergence of Riemannian gradient descent in an agnostic setting, where the hypothesis function is restricted to a single-index model while the PDE solution is arbitrary. We prove that gradient descent reaches an approximate global minimum: after $T = O(\log(1/\epsilon))$ iterations, the loss is within $\epsilon$ of a constant multiple of the optimal loss. We then examine the loss landscape when the source term of the PDE itself follows a single-index model, considering hypothesis functions given by either a single-index model or a two-neuron multi-index model. In the single-index case, we show that the minimum Ritz energy is attained at the feature vector aligned with that of the source term. In the two-neuron case, we study the landscape of regularized Ritz losses and characterize how a second feature emerges, given that the first feature is aligned with the source, as the regularization parameter varies. Finally, numerical experiments are presented to validate the feature emergence theory in the two-neuron setting.
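To make the single-index Deep Ritz setup concrete, here is a minimal, hypothetical sketch (not the paper's actual experiments): we take the model problem $-\Delta u + u = f$ with a single-index source $f(x) = \tanh(w_\star^\top x)$, restrict the hypothesis to $h_w(x) = \tanh(w^\top x)$ with $\|w\| = 1$, and run Riemannian (projected) gradient descent on a Monte Carlo estimate of the Ritz energy $\mathbb{E}[\tfrac12|\nabla h_w|^2 + \tfrac12 h_w^2 - f h_w]$. All concrete choices here (activation, potential, isotropic Gaussian sampling, step size) are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_star = np.zeros(d)
w_star[0] = 1.0                              # hypothetical feature vector of the source term

sigma = np.tanh
dsigma = lambda z: 1.0 / np.cosh(z) ** 2     # sigma'
d2sigma = lambda z: -2.0 * np.tanh(z) / np.cosh(z) ** 2  # sigma''

def source(x):
    # single-index source term f(x) = g(w_star . x); g = tanh chosen for illustration
    return np.tanh(x @ w_star)

def ritz_grad(w, x):
    # Monte Carlo estimate of the gradient of the Ritz energy
    #   L(w) = E[ 0.5*|grad h|^2 + 0.5*h^2 - f*h ],  h(x) = sigma(w . x), ||w|| = 1.
    # The radial part of the |grad h|^2 term is omitted: it is projected out below.
    z = x @ w
    coeff = dsigma(z) * d2sigma(z) + sigma(z) * dsigma(z) - source(x) * dsigma(z)
    return (coeff[:, None] * x).mean(axis=0)

w = rng.standard_normal(d)
w /= np.linalg.norm(w)                       # initialize on the unit sphere
eta = 0.5
for t in range(500):
    x = rng.standard_normal((4096, d))       # samples over the (assumed) domain distribution
    g = ritz_grad(w, x)
    g -= (g @ w) * w                         # Riemannian gradient: project onto the tangent space
    w -= eta * g
    w /= np.linalg.norm(w)                   # retract back onto the unit sphere

alignment = abs(w @ w_star)                  # feature alignment with the source term
print(round(alignment, 3))
```

Under the isotropic sampling distribution the quadratic terms of the energy are rotation-invariant in expectation, so only the cross term $-\mathbb{E}[f\,h_w]$ depends on the direction of $w$ and drives it toward $w_\star$; in this toy run the alignment should approach 1, mirroring the abstract's claim that the minimum Ritz energy is attained at the feature vector aligned with that of the source term.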
Supplementary Material: pdf
Primary Area: learning theory
Submission Number: 1348