A Study on the Optimization of the CNNs for Adversarial Attacks

Published: 01 Jan 2023, Last Modified: 13 Nov 2024, ICIIT 2023, CC BY-SA 4.0
Abstract: Convolutional Neural Networks (CNNs) achieve high accuracy on image classification tasks, including popular datasets such as MNIST and ImageNet. However, these models can be easily fooled by small perturbations added to the input image. To address this issue, the VOneBlock architecture was proposed, which can be prepended as a front end to a CNN-based model to improve its robustness to adversarial attacks. In this paper, we compare the performance of CNN models with and without VOneBlock when fine-tuned on adversarial datasets, and show how including VOneBlock affects the model's robustness. We also investigate how the number of Gabor filter kernels used in VOneBlock affects its performance. Through our experiments, we present an optimal way to enhance the robustness of CNN models to adversarial attacks using VOneBlock. Finally, we evaluate whether a classification model with VOneBlock performs well in classifying real-world attacked images as well as adversarially attacked images. Although VOneBlock was developed to improve robustness to small perturbations, we find that the network with VOneBlock also performs slightly better in classifying real-world attacked images.
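To illustrate the kind of architecture the abstract describes, the sketch below shows a fixed Gabor-filter front end prepended to a small trainable CNN, in PyTorch style. This is a minimal, hypothetical sketch: the kernel parameters, module names, and backbone are illustrative assumptions, and the published VOneBlock additionally includes simple/complex-cell nonlinearities and neuronal noise that are not reproduced here.

```python
# Hypothetical sketch: a fixed Gabor-filter front end prepended to a small CNN,
# loosely following the VOneBlock idea of a biologically inspired first stage.
# Kernel parameters and the backbone are illustrative assumptions, not the
# authors' exact configuration.
import math
import torch
import torch.nn as nn


def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Build one real-valued Gabor kernel of shape (size, size)."""
    half = size // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_t = xs * math.cos(theta) + ys * math.sin(theta)
    y_t = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = torch.exp(-(x_t**2 + (gamma * y_t) ** 2) / (2 * sigma**2))
    carrier = torch.cos(2 * math.pi * x_t / lam + psi)
    return envelope * carrier


class GaborFrontEnd(nn.Module):
    """Fixed (non-trainable) Gabor convolution followed by a nonlinearity."""

    def __init__(self, n_kernels=32, size=7, in_channels=1):
        super().__init__()
        conv = nn.Conv2d(in_channels, n_kernels, size, padding=size // 2, bias=False)
        with torch.no_grad():
            for k in range(n_kernels):
                theta = math.pi * k / n_kernels  # evenly spaced orientations
                kernel = gabor_kernel(size, sigma=2.0, theta=theta, lam=4.0)
                conv.weight[k] = kernel.repeat(in_channels, 1, 1) / in_channels
        conv.weight.requires_grad_(False)  # front end stays fixed during fine-tuning
        self.conv = conv
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))


# Trainable backbone behind the fixed front end (sized for 28x28 MNIST inputs).
model = nn.Sequential(
    GaborFrontEnd(n_kernels=32, in_channels=1),
    nn.Conv2d(32, 64, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(4),
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 10),
)
```

In this setup, only the backbone's weights are updated when fine-tuning on adversarial datasets; the `n_kernels` argument corresponds to the number of Gabor filter kernels whose effect on robustness the paper investigates.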