Quantum Vision Transformers

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: quantum computing, quantum machine learning, quantum deep learning, transformers, vision transformers
TL;DR: We propose several quantum algorithms that mimic or enhance the transformer architecture, prove theoretical guarantees, and provide experiments on real quantum hardware.
Abstract: In this work, we design and analyse quantum transformers, extending the state-of-the-art classical transformer architectures that have proven highly effective in natural language processing and image analysis. Building on prior work that uses parametrised quantum circuits for data loading and orthogonal neural layers, we introduce three types of quantum transformers, including one based on compound matrices. These architectures can be built with shallow quantum circuits and produce qualitatively different classification models. The three proposed quantum attention layers span a spectrum from closely mirroring classical transformers to exhibiting distinctly quantum characteristics. We also propose a method for loading a matrix as quantum states, along with two new trainable quantum orthogonal layers adaptable to different levels of qubit connectivity and hardware quality. Extensive simulations of the quantum transformers on standard medical image datasets show competitive, and at times better, performance compared to classical benchmarks, including best-in-class classical vision transformers. The trained quantum transformers require fewer parameters than the standard classical benchmarks, consistent with the predicted scaling advantage of our quantum attention layers with respect to image size. Finally, we implemented our quantum transformers on superconducting quantum computers and obtained encouraging results for experiments with up to six qubits.
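For intuition, the trainable orthogonal layers mentioned in the abstract can be understood classically as products of two-dimensional Givens rotations: in the unary amplitude encoding used in this line of work, each such rotation is realised by a single two-qubit RBS gate. Below is a minimal NumPy sketch of that picture; the function names and the pyramid-style wiring are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def givens(n, i, j, theta):
    # n x n orthogonal Givens rotation acting on coordinates (i, j);
    # in the unary encoding this is what one two-qubit RBS gate applies.
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[i, i], g[j, j] = c, c
    g[i, j], g[j, i] = s, -s
    return g

def orthogonal_layer(x, thetas, wires):
    # Apply a trainable orthogonal matrix, built as a product of
    # Givens rotations (one angle per gate), to the input vector x.
    n = len(x)
    w = np.eye(n)
    for theta, (i, j) in zip(thetas, wires):
        w = givens(n, i, j, theta) @ w
    return w @ x

# Example: a 4-dimensional input through a pyramid-style gate layout
# (the (i, j) pairs below are illustrative, not the paper's exact circuit).
x = np.array([0.5, -0.1, 0.8, 0.2])
wires = [(0, 1), (2, 3), (1, 2), (0, 1), (2, 3)]
thetas = np.random.default_rng(0).uniform(-np.pi, np.pi, len(wires))
print(orthogonal_layer(x, thetas, wires))
```

The layer's parameter count equals the number of rotation angles, which is why circuits of this family can use fewer trainable parameters than a dense classical weight matrix of the same dimension.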
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning