AdvQuNN: A Methodology for Analyzing the Adversarial Robustness of Quanvolutional Neural Networks

Published: 01 Jan 2024, Last Modified: 14 Sept 2024 · QSW 2024 · CC BY-SA 4.0
Abstract: Recent advancements in quantum computing have led to the development of hybrid quantum neural networks (HQNNs) that combine quantum and classical layers, such as Quanvolutional Neural Networks (QuNNs). While several works have demonstrated security threats to classical neural networks, such as adversarial attacks, their impact on QuNNs remains largely unexplored. This work addresses this gap by designing AdvQuNN, a specialized methodology for investigating the robustness of HQNNs, such as QuNNs, against adversarial attacks. It employs different types of Ansatzes as parameterized quantum circuits and different types of adversarial attacks. This study rigorously assesses the influence of quantum circuit architecture on the resilience of QuNN models, opening new pathways for enhancing the robustness of QuNNs and advancing the field of quantum cybersecurity. Our results show that, compared to classical convolutional networks, QuNNs achieve up to 60% higher robustness on the MNIST dataset and 40% on the FMNIST dataset.
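For illustration only, the sketch below shows how a quanvolutional filter with a swappable Ansatz and an FGSM-style adversarial perturbation might be combined in an evaluation pipeline. The function names (`quanv_circuit`, `quanv_layer`, `fgsm_perturb`), the 2x2 patch size, and the use of PennyLane's `BasicEntanglerLayers` are assumptions made for this example; it is not the AdvQuNN reference implementation.

```python
# Minimal sketch (assumption: PennyLane for the quantum filter, NumPy for the
# FGSM-style perturbation). This is NOT the paper's AdvQuNN implementation.
import numpy as np
import pennylane as qml

n_qubits = 4                                   # one qubit per pixel of a 2x2 patch
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quanv_circuit(patch, weights):
    """Encode a flattened 2x2 patch, apply a parameterized Ansatz, measure Z on each wire."""
    qml.AngleEmbedding(np.pi * patch, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # swappable Ansatz
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

def quanv_layer(image, weights, stride=2):
    """Slide the quantum filter over the image; each patch yields n_qubits output channels."""
    h, w = image.shape
    out = np.zeros((h // stride, w // stride, n_qubits))
    for i in range(0, h - 1, stride):
        for j in range(0, w - 1, stride):
            patch = image[i:i + 2, j:j + 2].flatten()
            out[i // stride, j // stride] = np.array(quanv_circuit(patch, weights))
    return out

def fgsm_perturb(image, grad, eps=0.1):
    """FGSM-style perturbation: step in the sign of the loss gradient, clip to valid pixel range."""
    return np.clip(image + eps * np.sign(grad), 0.0, 1.0)

# Usage: a random 28x28 image (MNIST-sized) and one Ansatz layer of trainable rotations.
image = np.random.rand(28, 28)
weights = np.random.uniform(0, 2 * np.pi, size=(1, n_qubits))
features = quanv_layer(image, weights)                          # shape (14, 14, 4)
adv_image = fgsm_perturb(image, grad=np.random.randn(28, 28))   # gradient is a placeholder here
```

In a robustness study of this kind, the Ansatz inside `quanv_circuit` would be swapped among several circuit architectures and the model's accuracy compared on clean versus adversarially perturbed inputs.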