Abstract: Deep neural networks (DNNs) have progressed rapidly in recent years and are increasingly deployed in real-world applications. They are now integral to critical tasks, such as traffic sign recognition in autonomous vehicles, where DNNs have become the primary method for most of the processing. However, many DNNs are known to be vulnerable to adversarial attacks: small but deliberately crafted perturbations applied to input data. Such perturbations can easily cause misclassification, posing significant risks, especially in autonomous vehicle systems. In this paper, we present a novel approach called multi-resolution training, which utilizes lower-resolution information from input images to retain essential features while partially filtering out adversarial perturbations. Our method designs convolutional neural network (CNN) layers that apply various downsampling techniques with custom-designed filters, followed by upsampling to restore the resolution for further network processing. This approach has been tested on multiple DNNs, and the results show that it effectively enhances their robustness against adversarial attacks.
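The downsample-then-upsample idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes simple 2x2 average-pool downsampling (the paper's custom-designed filters are not specified here) and nearest-neighbor upsampling, and shows how a high-frequency checkerboard perturbation is attenuated by the round trip while coarse image content survives.

```python
import numpy as np

def downsample_avg(img, factor=2):
    # Average-pool downsampling: a simple stand-in for the paper's
    # custom-designed filters (assumption; exact filters are not given here).
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    blocks = img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return blocks.mean(axis=(1, 3))

def upsample_nearest(img, factor=2):
    # Nearest-neighbor upsampling to restore resolution for downstream layers.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A high-frequency "adversarial-like" checkerboard perturbation averages to
# zero inside every 2x2 block, so it is removed exactly by the round trip.
rng = np.random.default_rng(0)
clean = rng.normal(size=(8, 8))
noise = 0.5 * np.array([[(-1) ** (i + j) for j in range(8)] for i in range(8)])
perturbed = clean + noise
restored = upsample_nearest(downsample_avg(perturbed))
```

In a real pipeline these operations would be implemented as fixed-filter convolution and upsampling layers inserted ahead of the trained network, as the abstract describes.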
External IDs: dblp:journals/sivp/RongHC25