Preserving Real-World Robustness of Neural Networks Under Sparsity Constraints

Published: 01 Jan 2024, Last Modified: 15 Apr 2025 · ECML/PKDD (5) 2024 · CC BY-SA 4.0
Abstract: Successful deployment of deep neural networks in physical applications requires resource constraints and real-world robustness considerations to be satisfied simultaneously. While larger models have been shown to be inherently more robust, they also come with massive demands in computation, energy, and memory, which makes them unsuitable for deployment on resource-constrained embedded devices. Our work focuses on the practical real-world robustness of neural networks under such limitations, particularly under memory-related sparsity constraints. We address both challenges by efficiently incorporating state-of-the-art data augmentation methods into the model compression pipeline to maintain robustness. We empirically evaluate various dense models and their pruned counterparts on a comprehensive set of real-world robustness metrics, including out-of-distribution generalization and resilience against universal adversarial patch attacks. We show that applying data augmentation strategies only during the pruning and finetuning phases is more critical for the robustness of sparsity-constrained networks than aiming for robustness when pre-training the overparameterized dense model in the first place. Our results demonstrate that sparse models obtained via data-augmentation-driven pruning can even outperform dense models trained end-to-end with exhaustive data augmentation.
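The pipeline the abstract describes, iterative pruning with augmentation applied only during the pruning and finetuning phases, can be illustrated with a minimal PyTorch sketch. The model (ResNet-18), dataset (CIFAR-10), sparsity schedule, and choice of AugMix as the augmentation below are illustrative assumptions, not the authors' exact setup:

```python
# Minimal sketch: iterative global magnitude pruning interleaved with
# finetuning on augmented data. All hyperparameters are illustrative.
import torch
import torch.nn.utils.prune as prune
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Robustness-oriented augmentation used only during pruning/finetuning;
# the dense pre-trained model need not have been trained with it.
augment = transforms.Compose([
    transforms.AugMix(),
    transforms.ToTensor(),
])
train_set = datasets.CIFAR10("data", train=True, download=True, transform=augment)
loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)

model = models.resnet18(weights="IMAGENET1K_V1")      # stand-in dense model
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # adapt head to CIFAR-10

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

prunable = [(m, "weight") for m in model.modules()
            if isinstance(m, torch.nn.Conv2d)]

# Each step prunes 50% of the *remaining* conv weights by global L1
# magnitude (cumulative sparsity: 50%, 75%, 87.5%), then finetunes on
# augmented data so the surviving weights adapt.
for step in range(3):
    prune.global_unstructured(
        prunable, pruning_method=prune.L1Unstructured, amount=0.5
    )
    model.train()
    for epoch in range(2):  # illustrative finetuning budget per step
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

# Bake the masks into the weights, removing the pruning reparametrization.
for module, name in prunable:
    prune.remove(module, name)
```

The key design point in the abstract is that the augmentation sits inside the prune-finetune loop rather than in dense pre-training, so the robustness-relevant adaptation happens while the sparsity pattern is being formed.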