Adaptive Learning of Quantum Hamiltonians

Submitted to ICLR 2024 on 22 Sept 2023 (modified: 11 Feb 2024)
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Hamiltonian Learning, Quantum Learning Theory, Iterative Scaling, Convergence, Quasi-Newton Methods, Anderson Mixing
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: The challenge of learning representations for quantum Hamiltonian systems lies at the intersection of quantum information and learning theory. Viewed through the lens of learning theory, this task can be regarded as the non-commutative counterpart to learning graphical models. We design and analyze adaptive learning algorithms, including the quantum iterative scaling algorithm (QIS) and gradient descent (GD), for the Hamiltonian inference problem using adaptive Gibbs state oracles. Our principal technical contribution is a thorough analysis of their convergence rates, established via lower and upper bounds on the spectrum of the Jacobian matrix at each iteration of these algorithms. Furthermore, we explore quasi-Newton methods to enhance the performance of both QIS and GD. Specifically, we propose the use of Anderson mixing and the L-BFGS method for QIS and GD, respectively. These quasi-Newton techniques yield orders-of-magnitude improvements in performance.
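The paper itself is not included here, but the abstract's mention of Anderson mixing as an accelerator for the iterative-scaling fixed-point updates can be illustrated generically. The sketch below is not the authors' algorithm: it is a minimal, standard Anderson acceleration routine for an arbitrary fixed-point map `g`, with the function name, history-depth parameter `m`, and the cosine test map all being illustrative assumptions.

```python
import numpy as np

def anderson_mixing(g, x0, m=5, tol=1e-10, max_iter=100):
    """Accelerate the fixed-point iteration x <- g(x) with Anderson mixing.

    Keeps up to m previous iterates; each step solves a small least-squares
    problem for mixing coefficients minimizing the combined residual, then
    forms the next iterate as that combination of the mapped points g(x_i).
    """
    xs = [np.asarray(x0, dtype=float)]
    gs = [g(xs[0])]
    for _ in range(max_iter):
        # Residuals f_i = g(x_i) - x_i for the stored history.
        fs = [gk - xk for gk, xk in zip(gs, xs)]
        if np.linalg.norm(fs[-1]) < tol:
            break
        if len(xs) == 1:
            x_new = gs[-1]  # plain fixed-point step on the first iteration
        else:
            # Minimize ||sum_i a_i f_i|| subject to sum_i a_i = 1, using the
            # unconstrained reparametrization a_last = 1 - sum(gamma).
            F = np.column_stack([f - fs[-1] for f in fs[:-1]])
            gamma, *_ = np.linalg.lstsq(F, -fs[-1], rcond=None)
            alpha = np.append(gamma, 1.0 - gamma.sum())
            x_new = sum(a * gi for a, gi in zip(alpha, gs))
        xs.append(x_new)
        gs.append(g(x_new))
        if len(xs) > m:  # drop the oldest history entry
            xs.pop(0)
            gs.pop(0)
    return xs[-1]

# Illustrative usage on a classical fixed point: x = cos(x).
root = anderson_mixing(np.cos, np.array([1.0]))
```

In the QIS setting described by the abstract, `g` would be one iterative-scaling update and the iterates would be the Hamiltonian parameters, with residuals measured against the target Gibbs-state marginals; the least-squares mixing step is what supplies the quasi-Newton-like acceleration.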
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5716