SLAM: Extending the Reach of Ising Machine Advantage

Matthew X. Burns, Zahra Azad, Yongchao Liu, Tony Geng, Hui Wu, Michael C. Huang

Published: 01 Feb 2026, Last Modified: 27 Jan 2026. IEEE Transactions on Computers. License: CC BY-SA 4.0
Abstract: Dynamics-based Ising machines (IMs) are a promising substrate for high-speed combinatorial optimization and sampling. For problems that fit within their fixed capacity, they provide orders-of-magnitude speedups over conventional software algorithms. Somewhat surprisingly, once a problem exceeds hardware capacity they become practically useless: previous works generally relegate them to the role of isolated sub-solvers. As our analysis will show, this approach produces no clear time or energy benefits, losing any inherent advantage of hardware IMs. After analyzing the shortcomings of previous hybrid proposals, we introduce a better method to extend IM advantage beyond hardware capacity: the Stepped Large-neighborhood Annealing Method (SLAM), which drastically improves performance on combinatorial benchmarks with minimal host-side computation. We also propose novel architectural support to implement SLAM with low data-movement overheads. The resulting augmented IM continues to exploit the parallelism enabled by dynamical systems and thus maintains order-of-magnitude time-to-solution and energy-to-solution advantages over CPU-based algorithms. As a side benefit, our approach also significantly reduces the impact of device variation on solution quality, another perennial issue for analog optimizers.