Robustness of Classical and Quantum-Inspired Architectures Against Structured Corruptions in Vision Tasks
Keywords: Quantum Machine Learning, Robustness, Quantum Convolutional Neural Network, Quantum Multi-Head Attention, Computer Vision
TL;DR: Quantum-inspired models (QCNN, QMHA) outperform CNNs in robustness to input corruptions on CIFAR-10 without retraining, suggesting that quantum-inspired architectural biases can enhance the resilience of real-world vision models
Abstract: Robust performance under distributional shifts and noisy inputs is critical for real-world deployment of machine learning models. While Convolutional Neural Networks (CNNs) remain the foundation of vision models, they are notoriously sensitive to corruptions in the input data, such as sensor noise, partial occlusions, or bit-level errors. Motivated by the growing interest in quantum machine learning, we investigate whether quantum-inspired architectural inductive biases can confer greater resilience to such perturbations. We conduct a systematic evaluation of three neural architectures, a classical CNN, a Quantum Convolutional Neural Network (QCNN), and Quantum Multi-Head Attention (QMHA), on the CIFAR-10 dataset under a diverse set of corruption regimes. Specifically, we test inference-time robustness against Gaussian noise, salt-and-pepper corruption, Fourier masking, stripe noise, block masking, and bit-flip noise, without any retraining or augmentation. Our results demonstrate that while CNNs degrade significantly under most corruptions, quantum-inspired architectures, particularly QMHA, exhibit improved robustness in multiple scenarios. These findings highlight the potential of quantum-informed designs in developing resilient vision models and suggest promising directions for future hybrid quantum-classical architectures in real-world deployment settings.
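The inference-time corruptions listed in the abstract can be illustrated with a minimal sketch. The functions below are assumptions for illustration only: the paper does not specify corruption parameters, so the noise level `sigma`, flip probability `p`, and block size used here are hypothetical defaults applied to images normalized to [0, 1].

```python
import numpy as np

def gaussian_noise(img, sigma=0.1):
    # Add zero-mean Gaussian noise; img is assumed float in [0, 1]
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def salt_and_pepper(img, p=0.05):
    # Set a fraction p of pixels to pure black or pure white
    out = img.copy()
    mask = np.random.rand(*img.shape[:2])
    out[mask < p / 2] = 0.0          # "pepper" pixels
    out[mask > 1 - p / 2] = 1.0      # "salt" pixels
    return out

def block_mask(img, size=8):
    # Zero out one randomly placed square block (partial occlusion)
    out = img.copy()
    h, w = img.shape[:2]
    y = np.random.randint(0, h - size + 1)
    x = np.random.randint(0, w - size + 1)
    out[y:y + size, x:x + size] = 0.0
    return out
```

In the no-retraining setting described above, such functions would be applied only to the evaluation images before they are fed to each trained model.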
If accepted, I would like to present this as a full paper.
Submission Number: 15