Maximizing Feature Distribution Variance for Robust Neural Networks

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM2024 Poster · CC BY 4.0
Abstract: The security of Deep Neural Networks (DNNs) has proven critical to their applicability in real-world scenarios. However, DNNs are well known to be vulnerable to adversarial attacks, such as the addition of artificially designed, imperceptible perturbations to benign inputs. Adversarial robustness is therefore essential for DNNs to defend against malicious attacks. Stochastic Neural Networks (SNNs) have recently proven effective at enhancing adversarial robustness by injecting uncertainty into models. Nevertheless, existing SNNs remain limited for adversarial defense, as their fixed uncertainty yields insufficient representation capability. In this paper, to elevate the feature representation capability of SNNs, we propose a novel yet practical stochastic neural network that maximizes feature distribution variance (MFDV-SNN). In addition, we provide theoretical insights supporting the adversarial resistance of MFDV, which derives primarily from the stochastic noise injected into the DNN. Our research demonstrates that by gradually increasing the level of stochastic noise in a DNN, the model naturally becomes more resistant to input perturbations. Since adversarial training is not required, MFDV-SNN does not compromise clean-data accuracy and reduces computation time by up to a factor of 7.5. Extensive experiments on various attacks demonstrate that MFDV-SNN improves adversarial robustness significantly compared to other methods.
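To make the mechanism concrete, here is a minimal PyTorch sketch of the core idea the abstract describes: inject learnable Gaussian noise into an intermediate feature layer and add a loss term that rewards larger feature variance. This is an illustrative assumption, not the paper's actual implementation; the names StochasticFeatureLayer and mfdv_loss, the lam weight, and the choice to sample only during training are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticFeatureLayer(nn.Module):
    """Injects Gaussian noise into intermediate features.

    Treats each feature vector h as the mean of a Gaussian and samples
    z ~ N(h, sigma^2) via the reparameterization trick; log_sigma is
    learnable, so the variance can grow during training.
    """
    def __init__(self, feature_dim: int):
        super().__init__()
        # Learnable per-dimension standard deviation, stored in log scale.
        self.log_sigma = nn.Parameter(torch.zeros(feature_dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        if self.training:
            eps = torch.randn_like(h)              # standard normal noise
            return h + eps * self.log_sigma.exp()  # z ~ N(h, sigma^2)
        return h  # deterministic at inference (a design choice, not from the paper)

    def variance_bonus(self) -> torch.Tensor:
        # Mean log-variance of the feature distribution: 2 * log(sigma).
        # Maximizing this term widens the injected noise distribution.
        return 2.0 * self.log_sigma.mean()


def mfdv_loss(logits, targets, stochastic_layer, lam=0.1):
    """Cross-entropy minus a variance-maximization bonus (hypothetical form)."""
    ce = F.cross_entropy(logits, targets)
    return ce - lam * stochastic_layer.variance_bonus()
```

The key design choice in this sketch is the sign of the regularizer: subtracting the log-variance bonus from the cross-entropy loss pushes sigma upward during training, in line with the abstract's observation that gradually increasing stochastic noise makes the model more resistant to input perturbations, while the cross-entropy term keeps the noisy features discriminative.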
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Experience] Interactions and Quality of Experience
Relevance To Conference: This work contributes to multimedia and multimodal processing by enhancing the security and reliability of the Deep Neural Networks (DNNs) these areas depend on. Multimedia and multimodal systems, which process and integrate data types such as images, video, and audio, increasingly rely on DNNs for tasks like object recognition, speech recognition, and scene understanding. The proposed Maximizing Feature Distribution Variance Stochastic Neural Network (MFDV-SNN) directly addresses a critical vulnerability of these systems: their susceptibility to adversarial attacks, where slight, often imperceptible modifications to the input data can lead to incorrect outputs. By injecting stochastic noise to increase feature distribution variance, MFDV-SNN not only enhances the robustness of DNNs against such adversarial manipulations but also maintains, or even improves, accuracy on clean data. This makes multimedia and multimodal systems more secure and reliable, and thus better suited to real-world applications where data integrity and system trustworthiness are paramount. Furthermore, because MFDV-SNN does not require adversarial training, its computational efficiency makes it a practical option for improving the adversarial robustness of systems that handle the diverse and complex data types inherent in multimedia and multimodal processing.
Supplementary Material: zip
Submission Number: 3523