Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm
Keywords: quantum algorithm, statistical estimation, computational complexity, computational-statistical gap, optimization, variational quantum algorithm, quantum machine learning, statistical physics, average-case complexity
TL;DR: This paper analyzes the QAOA for the spiked tensor model, showing that while it matches classical performance at constant depths, it exhibits qualitative differences and a limited quantum advantage, indicating potential for future quantum speedups.
Abstract: The quantum approximate optimization algorithm (QAOA) is a general-purpose algorithm for combinatorial optimization that has been a promising avenue for near-term quantum advantage.
In this paper, we analyze the performance of the QAOA on the spiked tensor model, a statistical estimation problem that exhibits a large computational-statistical gap classically.
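For context, a common formulation of the spiked $q$-tensor model (the normalization and prior below are assumptions for illustration; conventions vary across the literature and may differ from the paper's) observes a rank-one signal corrupted by Gaussian noise:
$$T = \lambda\, v^{\otimes q} + W, \qquad v \in \mathbb{R}^n,$$
where $\lambda$ is the signal-to-noise ratio and $W$ is a symmetric tensor with i.i.d. standard Gaussian entries. Weak recovery asks for an estimator $\hat v$ whose overlap $|\langle \hat v, v\rangle| / (\|\hat v\|\,\|v\|)$ stays bounded away from zero with high probability as $n \to \infty$; the computational-statistical gap refers to the range of $\lambda$ where recovery is information-theoretically possible but no known polynomial-time algorithm succeeds.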
We prove that the weak recovery threshold of $1$-step QAOA matches that of $1$-step tensor power iteration. Additional heuristic calculations suggest that the weak recovery threshold of $p$-step QAOA matches that of $p$-step tensor power iteration when $p$ is a fixed constant. This further implies that multi-step QAOA with tensor unfolding could achieve, but not surpass, the asymptotic classical computation threshold $\Theta(n^{(q-2)/4})$ for spiked $q$-tensors.
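As an illustration of the classical baseline that the QAOA is compared against, here is a minimal sketch of $p$-step tensor power iteration on a spiked 3-tensor ($q=3$); the noise scaling, hypercube spike prior, and normalization are assumptions chosen for readability, not the paper's exact setup.

```python
# Minimal sketch (assumed setup, not the paper's): p steps of tensor power
# iteration on a spiked 3-tensor T = lam * v⊗v⊗v / n + W.
import numpy as np

def spiked_3_tensor(n, lam, rng):
    """Generate a planted spike v and a noisy observation T (illustrative normalization)."""
    v = rng.choice([-1.0, 1.0], size=n)                    # spike on the hypercube (assumed prior)
    W = rng.standard_normal((n, n, n))                     # Gaussian noise, unsymmetrized for simplicity
    T = lam * np.einsum('i,j,k->ijk', v, v, v) / n + W
    return v, T

def tensor_power_iteration(T, u0, p):
    """p steps of the update u <- T(., u, u), renormalized each step."""
    u = u0 / np.linalg.norm(u0)
    for _ in range(p):
        u = np.einsum('ijk,j,k->i', T, u, u)
        u = u / np.linalg.norm(u)
    return u

rng = np.random.default_rng(0)
n, lam, p = 50, 5.0, 3
v, T = spiked_3_tensor(n, lam, rng)
u = tensor_power_iteration(T, rng.standard_normal(n), p)
overlap = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(f"overlap with the planted spike after {p} steps: {overlap:.3f}")
```

The overlap printed at the end is the quantity whose recovery threshold and asymptotic distribution the paper analyzes for the QAOA.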
Meanwhile, we characterize the asymptotic overlap distribution for $p$-step QAOA, discovering an intriguing sine-Gaussian law verified through simulations. For some $p$ and $q$, the QAOA has an effective recovery threshold that is a constant factor better than tensor power iteration.
Of independent interest, our proof techniques employ the Fourier transform to handle difficult combinatorial sums, a novel approach differing from prior QAOA analyses on spin-glass models without planted structure.
Primary Area: Learning theory
Submission Number: 11801