Predictive Performance of Deep Quantum Data Re-uploading Models

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We investigate the predictive performance of data re-uploading models with deep encoding layers and demonstrate that their performance degrades to near-random guessing as the number of encoding layers increases.
Abstract: Quantum machine learning models incorporating data re-uploading circuits have garnered significant attention due to their exceptional expressivity and trainability. However, their ability to generate accurate predictions on unseen data, referred to as predictive performance, remains insufficiently investigated. This study reveals a fundamental limitation in predictive performance when deep encoding layers are employed within the data re-uploading model. Concretely, we theoretically demonstrate that when limited-qubit data re-uploading models process high-dimensional data, their predictive performance progressively degenerates to near random-guessing levels as the number of encoding layers increases. In this regime, repeated data uploading cannot mitigate the performance degradation. These findings are validated through experiments on both synthetic linearly separable datasets and real-world datasets. Our results demonstrate that, when processing high-dimensional data, quantum data re-uploading models should be designed with wider circuit architectures rather than deeper, narrower ones.
Lay Summary: A model architecture called data re-uploading is currently popular in the quantum machine learning community. It is well known because it can encode high-dimensional data into a quantum circuit with a limited number of qubits, and the higher the dimensionality, the stronger its ability to fit data. We are curious: how good is this model's predictive performance? Our theoretical analysis reveals a surprising fact: in quantum circuits with limited qubits, once the dimensionality of the encoded data becomes too high, the predictive performance of data re-uploading models approaches random guessing. This limitation is determined by the model structure and is independent of the optimization method. No matter how well the model performs on the training set, its predictions will approach random guessing for most data. Our research theoretically demonstrates the infeasibility of deep data re-uploading models and provides guidance for future quantum machine learning model design. Perhaps wider data re-uploading models would be more effective.
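To make the architecture under discussion concrete, the sketch below simulates a minimal single-qubit data re-uploading model in plain NumPy. This is an illustrative toy, not the authors' exact circuit: the choice of Rz-Ry-Rz encoding, the trainable Ry-Rz block, and the specific layer count are assumptions for demonstration. Each encoding layer re-uploads three data features, so a d-dimensional input requires roughly d/3 layers on one qubit, which is exactly the "deep and narrow" regime the paper analyzes.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    """Single-qubit rotation about the Z axis."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]], dtype=complex)

def reuploading_expval(x, weights):
    """Single-qubit data re-uploading circuit (illustrative sketch).

    x       : array of shape (L, 3), three data features per encoding layer
    weights : array of shape (L, 2), trainable angles per layer
    Returns the Pauli-Z expectation value, used as the model's prediction.
    """
    state = np.array([1.0, 0.0], dtype=complex)  # start in |0>
    for (w_y, w_z), (x1, x2, x3) in zip(weights, x):
        # Data-encoding block: re-upload three features per layer.
        state = rz(x3) @ (ry(x2) @ (rz(x1) @ state))
        # Trainable block interleaved between encodings.
        state = rz(w_z) @ (ry(w_y) @ state)
    # <Z> = |amplitude of |0>|^2 - |amplitude of |1>|^2
    return float(np.abs(state[0]) ** 2 - np.abs(state[1]) ** 2)

# A 6-dimensional input split across L = 2 encoding layers (3 features each).
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(2, 3))
weights = rng.uniform(-np.pi, np.pi, size=(2, 2))
print(reuploading_expval(x, weights))
```

Deepening this circuit (larger L) lets it absorb more input features without adding qubits; the paper's claim is that for high-dimensional inputs this depth-for-width trade-off drives predictions toward random guessing, regardless of how the weights are trained.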
Link To Code: https://github.com/SheffieldWang/deep-quantum-data-reuploading
Primary Area: General Machine Learning->Supervised Learning
Keywords: Quantum machine learning, Data re-uploading, Predictive performance
Submission Number: 11225