Towards Efficient On-Chip Training of Quantum Neural Networks

29 Sept 2021 (modified: 31 Oct 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: Quantum Computing, Machine Learning, Neural Networks, Robustness, Quantum Machine Learning, Quantum Neural Networks, On-Chip, Training
Abstract: Quantum Neural Networks (QNNs) are attracting increasing research interest thanks to their potential to achieve quantum advantage on near-term Noisy Intermediate-Scale Quantum (NISQ) hardware. To achieve scalable QNN learning, the training process needs to be offloaded to real quantum machines instead of exponential-cost classical simulators. One common approach to obtaining QNN gradients is the parameter-shift rule, whose cost scales linearly with the number of qubits. This work presents the first experimental demonstration of practical on-chip QNN training with parameter shift. However, we find that because of the significant quantum errors (noise) on real machines, gradients obtained from naive parameter shift have low fidelity and thus degrade training accuracy. To this end, we further propose probabilistic gradient pruning, which first identifies gradients with potentially large errors and then removes them. Specifically, small gradients have larger relative errors than large ones and therefore have a higher probability of being pruned. We perform extensive experiments on 5 classification tasks with 5 real quantum machines. The results demonstrate that our on-chip training achieves over 90% and 60% accuracy for 2-class and 4-class image classification tasks, respectively. Probabilistic gradient pruning brings up to 7% QNN accuracy improvement over no pruning. Overall, we obtain accuracy comparable to noise-free simulation with much better training scalability. We also open-source our PyTorch library for on-chip QNN training with parameter shift and easy deployment at this link: https://anonymous.4open.science/r/iclr-on-chip-qnn-572E.
One-sentence Summary: We demonstrate high scalability and efficiency of on-chip training and inference of quantum neural networks with parameter shift, and propose a gradient pruning method to mitigate quantum noise during training.
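
To make the two ideas in the abstract concrete, below is a minimal sketch of parameter-shift gradient estimation combined with probabilistic gradient pruning. The `expectation`, `parameter_shift_grad`, and `probabilistic_gradient_prune` functions, the noise model, and the `keep_ratio` sampling rule are illustrative assumptions for this sketch; they are not the released library's API or the paper's exact pruning schedule.

```python
# Sketch: parameter-shift gradients with probabilistic gradient pruning.
# `expectation` stands in for a noisy hardware/simulator measurement call.
import numpy as np

rng = np.random.default_rng(0)

def expectation(params: np.ndarray) -> float:
    # Placeholder for the measured expectation value of a parameterized
    # circuit; Gaussian noise crudely models shot/device noise (assumption).
    noise = rng.normal(scale=0.05)
    return float(np.sum(np.cos(params))) + noise

def parameter_shift_grad(params: np.ndarray, shift: float = np.pi / 2) -> np.ndarray:
    # Parameter-shift rule: dE/d(theta_i) = [E(theta_i + s) - E(theta_i - s)] / 2,
    # i.e. two circuit evaluations per parameter (cost linear in #parameters).
    grad = np.zeros_like(params)
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += shift
        minus[i] -= shift
        grad[i] = (expectation(plus) - expectation(minus)) / 2.0
    return grad

def probabilistic_gradient_prune(grad: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    # Small gradients carry larger relative noise, so keep entries with
    # probability proportional to their magnitude and zero out the rest.
    # keep_ratio and the sampling rule are illustrative assumptions.
    mags = np.abs(grad)
    if mags.sum() == 0.0:
        return grad
    probs = mags / mags.sum()
    n_keep = min(max(1, int(keep_ratio * len(grad))), int(np.count_nonzero(mags)))
    kept = rng.choice(len(grad), size=n_keep, replace=False, p=probs)
    pruned = np.zeros_like(grad)
    pruned[kept] = grad[kept]
    return pruned

# Tiny training loop: gradient descent on the noisy expectation value.
params = rng.uniform(0, 2 * np.pi, size=8)
for step in range(20):
    grad = probabilistic_gradient_prune(parameter_shift_grad(params))
    params -= 0.1 * grad
```

The design choice to keep large-magnitude gradients preferentially follows the abstract's observation that small gradients have larger relative errors on noisy hardware, so pruning them first discards the least trustworthy update directions.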