Multi-Lattice Sampling of Quantum Field Theories via Neural Operator-based Flows

Published: 03 Mar 2024 · Last Modified: 04 May 2024 · AI4DiffEqtnsInSci @ ICLR 2024 Poster · CC BY 4.0
Keywords: Lattice field theory; neural operators; continuous normalizing flow
Abstract: We consider the problem of sampling discrete field configurations $\phi$ from the Boltzmann distribution $[d\phi] Z_1^{-1} e^{-S_1[\phi]}$, where $S_1$ is the lattice discretization of the continuous Euclidean action $\mathcal S_1$ of some quantum field theory. Since such densities arise as approximations of the underlying functional density $[\mathcal D\phi(x)] \mathcal Z_1^{-1} e^{-\mathcal S_1[\phi(x)]}$, we frame the task as an instance of operator learning. In particular, we propose to approximate a time-dependent operator $\mathcal V_t$ whose time integral provides a mapping between the functional distributions of the free theory $[\mathcal D\phi(x)] \mathcal Z_0^{-1} e^{-\mathcal S_{0}[\phi(x)]}$ and of the target theory $[\mathcal D\phi(x)]\mathcal Z_1^{-1}e^{-\mathcal S_1[\phi(x)]}$. Once a particular lattice is chosen, the operator $\mathcal V_t$ can be discretized to a finite-dimensional, time-dependent vector field $V_t$, which in turn induces a continuous normalizing flow between finite-dimensional distributions over the chosen lattice. This flow can then be trained to be a diffeomorphism between the discretized free and target theories $[d\phi] Z_0^{-1} e^{-S_{0}[\phi]}$ and $[d\phi] Z_1^{-1}e^{-S_1[\phi]}$. We run experiments on the 2-dimensional $\phi^4$-theory to explore to what extent such operator-based flow architectures generalize to lattice sizes they were not trained on, and show that pretraining on smaller lattices can lead to a speedup over training directly on the target lattice size.
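To make the pipeline in the abstract concrete, below is a minimal sketch of an operator-based flow sampler. It makes several simplifying assumptions that depart from the paper: the base distribution is a unit Gaussian rather than the discretized free theory $[d\phi] Z_0^{-1} e^{-S_{0}[\phi]}$, the discretized operator $V_t$ is a single learned spectral (Fourier-mode) layer whose mode-indexed weights can be applied on any lattice size, the ODE is integrated with fixed-step Euler, and the CNF divergence is computed exactly (tractable only for small lattices). All function names, couplings, and hyperparameters are illustrative, not taken from the paper.

```python
# A minimal sketch (JAX) of a CNF whose vector field is a discretized operator,
# trained by reverse KL against a 2-d lattice phi^4 action. Illustrative only.
import jax
import jax.numpy as jnp

def phi4_action(phi, m2=-4.0, lam=8.0):
    """Standard 2-d lattice phi^4 action S_1[phi] (example couplings)."""
    kin = sum(0.5 * ((phi - jnp.roll(phi, 1, ax)) ** 2).sum() for ax in (0, 1))
    return kin + (0.5 * m2 * phi**2 + lam * phi**4).sum()

def spectral_conv(phi, w):
    """Act on the K lowest Fourier modes with learned complex weights.

    Because w is indexed by mode number, the same parameters define a map on
    any lattice size -- a discretization of one underlying operator. (A full
    spectral layer would also treat the remaining mode quadrants.)"""
    K = w.shape[-1]
    f = jnp.fft.fft2(phi)
    out = jnp.zeros_like(f)
    out = out.at[:K, :K].set(f[:K, :K] * w[0])    # positive-frequency block
    out = out.at[-K:, :K].set(f[-K:, :K] * w[1])  # negative-frequency block
    return jnp.fft.ifft2(out).real

def vector_field(params, t, phi):
    """Time-dependent vector field V_t, i.e. the discretized operator."""
    h = jnp.tanh(spectral_conv(phi, params["w_in"])
                 + params["a"] * phi + params["b"] * t)
    return spectral_conv(h, params["w_out"]) + params["c"] * h

def flow(params, phi0, n_steps=16):
    """Euler-integrate d phi/dt = V_t(phi), accumulating div V_t so that
    log q(phi_1) = log N(phi_0) - int_0^1 div V_t dt."""
    L = phi0.shape[0]
    dt = 1.0 / n_steps
    def div_v(t, phi):
        flat_v = lambda x: vector_field(params, t, x.reshape(L, L)).ravel()
        # Exact trace; a Hutchinson estimator is the usual choice at scale.
        return jnp.trace(jax.jacfwd(flat_v)(phi.ravel()))
    phi, logdet = phi0, 0.0
    for i in range(n_steps):
        t = i * dt
        logdet = logdet + dt * div_v(t, phi)
        phi = phi + dt * vector_field(params, t, phi)
    return phi, logdet

def reverse_kl(params, key, L=8, batch=32):
    """Self-training loss E_q[log q(phi) + S_1(phi)], up to a constant."""
    phi0 = jax.random.normal(key, (batch, L, L))
    phi1, logdet = jax.vmap(lambda p: flow(params, p))(phi0)
    log_base = -0.5 * (phi0**2).sum(axis=(1, 2))  # unit-Gaussian base
    log_q = log_base - logdet
    return (log_q + jax.vmap(phi4_action)(phi1)).mean()

# Example usage (illustrative): initialize small complex mode weights and
# take one gradient step's worth of derivatives.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
cinit = lambda k: 0.01 * jax.random.normal(k, (2, 4, 4), dtype=jnp.complex64)
params = {"w_in": cinit(k1), "w_out": cinit(k2),
          "a": jnp.array(0.1), "b": jnp.array(0.0), "c": jnp.array(0.1)}
loss, grads = jax.value_and_grad(reverse_kl)(params, k3)
```

Because the spectral weights are tied to mode numbers rather than to a fixed grid, the same `params` can be evaluated on, say, an 8x8 lattice during pretraining and a 16x16 lattice afterwards, which is the multi-lattice generalization the abstract tests.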
Submission Number: 2