TL;DR: SMAAT provides an explanation for the discrepancy in robustness and generalization trends between vision and language models. Building on this explanation, SMAAT delivers better robustness for encoder-based LLMs while using 25–33% less GPU time than standard AT.
Abstract: Adversarial Training (AT) impacts different architectures in distinct ways: vision models gain robustness but face reduced generalization, encoder-based models exhibit limited robustness improvements with minimal generalization loss, and recent work in latent-space adversarial training demonstrates that decoder-based models achieve improved robustness by applying AT across multiple layers.
We provide the first explanation for these trends by leveraging the manifold conjecture: off-manifold adversarial examples (AEs) enhance robustness, while on-manifold AEs improve generalization.
We show that vision and decoder-based models exhibit low intrinsic dimensionality in earlier layers (favoring off-manifold AEs), whereas encoder-based models do so in later layers (favoring on-manifold AEs).
Exploiting this property, we introduce SMAAT, which improves the scalability of AT for encoder-based models by perturbing the layer with the lowest intrinsic dimensionality. This reduces the projected gradient descent (PGD) chain length required for AE generation, cutting GPU time by 25–33% while significantly boosting robustness. We validate SMAAT across multiple tasks, including text generation, sentiment classification, safety filtering, and retrieval-augmented generation setups, demonstrating superior robustness with comparable generalization to standard training.
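To make the mechanism described above concrete, here is a minimal PyTorch sketch of adversarial training applied at a single intermediate layer, in the spirit of SMAAT. It assumes the encoder has been split into hypothetical `lower` and `upper` sub-networks at the chosen layer k (the layer with the lowest intrinsic dimensionality); the function name, the split, and the hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def smaat_style_loss(lower, upper, x, y, eps=0.05, alpha=0.02, steps=1):
    """One adversarial-training step that perturbs the hidden states at layer k.

    `lower` maps inputs to the layer-k representation; `upper` maps that
    representation to logits. Both are hypothetical splits of an encoder model.
    """
    h = lower(x)  # clean hidden states at the chosen (low intrinsic-dimensionality) layer

    # Inner maximization: a short l_inf PGD chain in representation space.
    # Only `upper` is traversed when computing adversarial gradients, which is
    # what makes this cheaper than input-space PGD through the whole network.
    delta = torch.zeros_like(h, requires_grad=True)
    for _ in range(steps):
        adv_loss = F.cross_entropy(upper(h.detach() + delta), y)
        (grad,) = torch.autograd.grad(adv_loss, delta)
        delta.data.add_(alpha * grad.sign())  # ascent step on the loss
        delta.data.clamp_(-eps, eps)          # project back into the eps-ball

    # Outer minimization: train on the perturbed representation.
    return F.cross_entropy(upper(h + delta.detach()), y)
```

In a training loop, one would split the model at layer k, call this function on each batch, and backpropagate the returned loss into the model parameters. Because adversarial gradients only traverse the layers above k, the PGD chain is shorter than in standard input-space AT, which is the source of the GPU-time savings reported in the abstract.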
Lay Summary: Adversarial training is a popular method for making AI models more robust to small, carefully crafted changes in input data that can fool the model. However, while it improves robustness in vision models, it often reduces their accuracy; surprisingly, it improves both robustness and accuracy in some language models. In this work, we uncover a key reason behind this difference: the complexity of the “data manifold”, the space that the model believes the data lives in, varies across models. We find that adversarial examples in vision and decoder-based models tend to fall off this manifold, which increases robustness but harms accuracy. In contrast, encoder-based language models see more on-manifold adversarial examples, which improve accuracy. Based on this insight, we propose SMAAT, a new method that selectively applies adversarial training to the most effective layer in the network. This allows us to make language models more robust while using significantly less compute. Our method achieves state-of-the-art results in tasks like classification, safety filtering, and document retrieval, while using just a fraction of the GPU time compared to standard methods. These findings may help build more robust and efficient AI systems across domains.
Link To Code: https://github.com/EnesAltinisik/SMAAT-25/tree/main
Primary Area: Deep Learning->Robustness
Keywords: adversarial training, robustness, data manifold
Submission Number: 10556