On the Relation between Trainability and Dequantization of Variational Quantum Learning Models

ICLR 2025 Conference Submission 6533 Authors (anonymous)

26 Sept 2024 (modified: 02 Dec 2024), ICLR 2025 Conference Submission, CC BY 4.0
Keywords: quantum machine learning, machine learning theory, quantum information theory
TL;DR: We prove that commonly used quantum machine learning models can simultaneously be trainable and capable of solving classically intractable problems.
Abstract: Quantum machine learning (QML) explores the potential advantages of quantum computers for machine learning tasks, with variational QML being one of the main current approaches. While quantum computers promise to solve problems that are classically intractable, it has recently been shown that particular quantum algorithms which outperform all pre-existing classical algorithms can be matched by newly developed classical approaches (often inspired by the quantum algorithm). We say such algorithms have been dequantized. For QML models to be effective, they must be trainable and non-dequantizable. The relationship between these properties is still not fully understood, and recent works have called into question whether QML models can ever be both trainable and non-dequantizable, challenging the potential of QML altogether. In this work we answer open questions regarding when trainability and non-dequantization are compatible. We first formalize the key concepts and place them in the context of prior research. We then examine the role of the "variationalness" of QML models, using well-known quantum circuit architectures as leading examples. Our results provide recipes for constructing variational QML models that are both trainable and non-dequantizable, paving the way toward practically relevant variational QML.
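For context, trainability in this setting is usually formalized via the absence of barren plateaus. The following is a minimal sketch of the standard criterion from the QML literature, stated here as an assumed background notion and not necessarily the exact definition adopted in the submission: a variational model with cost $C(\boldsymbol{\theta})$ acting on $n$ qubits exhibits a barren plateau when

$$\mathrm{Var}_{\boldsymbol{\theta}}\!\left[\partial_{\theta_k} C(\boldsymbol{\theta})\right] \in O(b^{-n}) \quad \text{for some } b > 1,$$

so that estimating the gradient requires a number of measurement shots growing exponentially in $n$; trainable models are, roughly, those for which this variance decays at most polynomially. Non-dequantizability additionally demands that no efficient classical algorithm can match the model's performance.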
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6533