Quantum Optimization via Gradient-Based Hamiltonian Descent

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: With rapid advancements in machine learning, first-order algorithms have emerged as the backbone of modern optimization techniques, owing to their computational efficiency and low memory requirements. Recently, the connection between accelerated gradient methods and damped heavy-ball motion, particularly within the framework of Hamiltonian dynamics, has inspired the development of innovative quantum algorithms for continuous optimization. One such algorithm, Quantum Hamiltonian Descent (QHD), leverages quantum tunneling to escape saddle points and local minima, facilitating the discovery of global solutions in complex optimization landscapes. However, QHD faces several challenges, including slower convergence rates compared to classical gradient methods and limited robustness in highly non-convex problems due to the non-local nature of quantum states. Furthermore, the original QHD formulation primarily relies on function value information, which limits its effectiveness. Inspired by insights from high-resolution differential equations that have elucidated the acceleration mechanisms in classical methods, we propose an enhancement to QHD by incorporating gradient information, leading to what we call gradient-based QHD. This gradient-based QHD achieves faster convergence and significantly increases the likelihood of identifying global solutions. Numerical simulations on challenging problem instances demonstrate that this gradient-based QHD outperforms existing quantum and classical methods by at least an order of magnitude.
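Illustrative sketch (not from the paper): the abstract describes QHD as Schrödinger evolution under a time-dependent Hamiltonian that interpolates between a kinetic (tunneling) term and the objective as a potential. The snippet below is a minimal classical grid simulation of a QHD-style evolution in one dimension via a split-operator (Trotter) scheme, assuming simple decaying/growing schedules a(t), b(t) that are placeholders rather than the paper's choices; the gradient-based modification proposed in the paper is not reproduced here.

```python
# Minimal classical (grid-based) simulation of a QHD-style evolution in 1-D.
# Illustrative only: the schedules a(t), b(t) are assumptions, not the paper's,
# and the gradient-based enhancement described in the abstract is not included.
import numpy as np

# 1-D grid and a nonconvex test potential with a local and a global minimum
N = 1024
L = 10.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # Fourier wavenumbers

f = 0.1 * (x**2 - 4) ** 2 + 0.5 * x              # tilted double well; global min near x = -2

# Initial state: broad Gaussian (roughly a uniform superposition over the domain)
psi = np.exp(-x**2 / 8).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

T, dt = 10.0, 1e-3
for step in range(int(T / dt)):
    t = (step + 1) * dt
    a = 1.0 / (1.0 + t) ** 2                     # assumed decaying kinetic prefactor
    b = (1.0 + t) ** 2                           # assumed growing potential prefactor
    # Split-operator step for H(t) = a(t) * (-1/2 d^2/dx^2) + b(t) * f(x)
    psi *= np.exp(-0.5j * dt * b * f)                                   # half potential step
    psi = np.fft.ifft(np.exp(-0.5j * dt * a * k**2) * np.fft.fft(psi))  # kinetic step
    psi *= np.exp(-0.5j * dt * b * f)                                   # half potential step

prob = np.abs(psi) ** 2 * dx
print("E[x] =", np.sum(x * prob))                # probability mass concentrates near the global minimum
print("density peak at x =", x[np.argmax(prob)])
```

As the kinetic prefactor decays and the potential prefactor grows, the wavefunction's probability mass concentrates near the global minimizer rather than the shallower local well, which is the tunneling behavior the abstract attributes to QHD.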
Lay Summary: In machine learning, we often need to find the lowest point of a complex function, a bit like finding the bottom of a tricky valley. This is called continuous optimization. Traditional methods are good at this, but they can sometimes get stuck in "local minima"—spots that seem like the lowest point but aren't the absolute deepest. With quantum computing emerging as a powerful new tool, we wanted to see if these futuristic machines could help us find those true lowest points more effectively. Our paper shares an exciting discovery: by adding "gradient information" (which tells us the steepest way down the valley) to existing quantum methods, these algorithms can perform much better on tough optimization problems. Our new approach, called gradient-based Quantum Hamiltonian Descent, finds the actual lowest point much faster and more reliably. These findings open up new possibilities for designing quantum optimization algorithms. They also suggest that quantum computers could play a big role in speeding up how we train machine learning models in the future.
Link To Code: https://github.com/jiaqileng/Gradient-Based-QHD
Primary Area: Optimization->Non-Convex
Keywords: Quantum Algorithms, Non-Convex Optimization, Gradient-Based Methods, Hamiltonian Dynamics
Submission Number: 5059