Keywords: Quantum Geometric Tensor, Neural Network Quantum States, Fisher Information Matrix, Variational Monte Carlo, Natural Gradient
TL;DR: We introduce a block-diagonal approximation of the quantum geometric tensor for neural quantum states, enabling scalable natural-gradient optimization with improved conditioning and convergence in variational Monte Carlo simulations.
Abstract: The natural gradient is central to the optimization of neural quantum states, but its use is limited by the cost of computing and inverting the quantum geometric tensor, the quantum analogue of the Fisher information matrix. We study a block-diagonal quantum geometric tensor that partitions the metric by network layer, analogous to block-structured Fisher approximations such as K-FAC. This layer-wise approximation
preserves the essential curvature information while discarding noisy cross-layer correlations, improving both conditioning and scalability. Experiments on Heisenberg and frustrated $J_1$–$J_2$ models show faster convergence, lower variational energies, and improved stability.
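To illustrate the idea behind the abstract, the following is a minimal sketch (not the authors' implementation) of a layer-wise block-diagonal natural-gradient update: each layer's quantum geometric tensor block is estimated from centered log-derivatives and inverted independently, so no cross-layer blocks are ever formed. The array shapes, sample count, and the `shift` regularizer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: O[l] holds log-derivatives of the wavefunction,
# O[l][s, k] = d log psi(x_s) / d theta_{l,k}, for each network layer l.
n_samples = 256
layer_sizes = [8, 4]                                  # illustrative layer widths
O = [rng.normal(size=(n_samples, p)) for p in layer_sizes]
grads = [rng.normal(size=p) for p in layer_sizes]     # per-layer energy gradients

def block_diag_natural_gradient(O, grads, shift=1e-3):
    """Block-diagonal natural gradient: solve each layer's QGT block separately.

    For layer l, the centered block is S_l = <O_l^dag O_l> - <O_l>^dag <O_l>;
    we solve (S_l + shift * I) delta_l = g_l and ignore cross-layer blocks
    of the full metric entirely.
    """
    deltas = []
    for O_l, g_l in zip(O, grads):
        O_c = O_l - O_l.mean(axis=0, keepdims=True)   # center per parameter
        S_l = O_c.conj().T @ O_c / O_l.shape[0]       # QGT block for this layer
        S_l += shift * np.eye(S_l.shape[0])           # diagonal shift for stability
        deltas.append(np.linalg.solve(S_l, g_l))      # O(p_l^3) per layer only
    return deltas

deltas = block_diag_natural_gradient(O, grads)
print([d.shape for d in deltas])
```

Because each block is inverted independently, the cubic cost of the solve scales with the largest layer width rather than the total parameter count, which is the scalability argument made in the abstract.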
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 15