QC-BERT: A Quantum-Classical Hybrid Framework for Efficient Sentiment Analysis and Question Answering
Abstract: Transformers have revolutionized NLP but are constrained by their massive parameter counts, posing challenges for edge deployment. Quantum computing, leveraging superposition and entanglement, promises exponential efficiency gains, yet practical, scalable quantum NLP (QNLP) applications remain scarce. In this work, we propose QuantumDistilBERT and HybridTinyBERTQC, the first scalable hybrid quantum-classical transformer models designed for both core NLP tasks and resource-constrained environments. QuantumDistilBERT achieves 91.36% accuracy on IMDB, just 1.46% below DistilBERT, while reducing trainable parameters by 89.4%, demonstrating strong edge applicability. HybridTinyBERTQC, enhanced with quantum self-attention mechanisms, achieves 82.31% F1 and 73.10% EM on SQuAD 1.1, and 32.86% F1 on Adversarial QA, outperforming TinyBERT (undistilled on task-specific datasets) by over 1% (p < 0.05) on SQuAD and 3.55% on AQA. A novel complexity scoring mechanism reduces quantum circuit overhead by 20% and generalizes well to other text classification tasks. Notably, our hybrid model exhibits a 41.3% reduction in loss variance (0.1329 vs. 0.2265) and uniquely achieves perfect reproducibility across runs with the same random seed, producing identical metrics and loss values every time. This consistency underscores the model's reliability, a critical requirement for edge deployment. Extensive evaluations on IMDB, SQuAD, Adversarial QA, and SST-2 demonstrate the scalability and robustness of our approach. While quantum noise in NISQ hardware still limits performance on subjective tasks, our work lays the groundwork for practical, reproducible, and deployable QNLP systems on edge devices.
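The abstract does not specify the implementation stack, but hybrid layers of this kind are commonly built with a variational quantum circuit wrapped as a differentiable module. Below is a minimal sketch, assuming a PennyLane/PyTorch stack and a 4-qubit circuit (both assumptions, not details from the paper), of a quantum-enhanced classification head that could sit on top of a frozen classical encoder such as DistilBERT:

```python
# Minimal sketch (assumptions, not the authors' code): a variational quantum
# head on top of frozen classical encoder features, in the spirit of a hybrid
# quantum-classical transformer. Requires: pip install pennylane torch
import pennylane as qml
import torch

n_qubits = 4  # assumed circuit width; the paper does not state one
dev = qml.device("default.qubit", wires=n_qubits)  # noiseless simulator

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    # Encode classical features as single-qubit rotation angles.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Trainable entangling layers form the "quantum" part of the head.
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    # One Pauli-Z expectation per qubit becomes the layer's output vector.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

class HybridHead(torch.nn.Module):
    """Classical projection -> variational circuit -> classical classifier."""
    def __init__(self, in_dim=768, n_classes=2, q_layers=2):
        super().__init__()
        self.pre = torch.nn.Linear(in_dim, n_qubits)   # compress [CLS] features
        self.q = qml.qnn.TorchLayer(circuit, {"weights": (q_layers, n_qubits)})
        self.post = torch.nn.Linear(n_qubits, n_classes)

    def forward(self, x):
        # tanh keeps the encoded angles in a bounded range before embedding
        return self.post(self.q(torch.tanh(self.pre(x))))

head = HybridHead()
logits = head(torch.randn(8, 768))  # e.g. a batch of DistilBERT [CLS] vectors
print(logits.shape)                 # torch.Size([8, 2])
```

In this sketch only the small projections and the circuit weights are trainable; freezing the encoder in this way is one plausible route to the large reduction in trainable parameters reported above, though the paper's exact architecture may differ.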
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Colin_Raffel1
Submission Number: 4885