Scaling Trends in Language Model Robustness

Published: 01 May 2025 · Last Modified: 15 Aug 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
TL;DR: We study scaling trends for offense, defense, and offense/defense balance in the context of language model adversarial robustness.
Abstract: Increasing model size has unlocked a dazzling array of capabilities in language models. At the same time, even frontier models remain vulnerable to jailbreaks and prompt injections, despite concerted efforts to make them robust. As both attackers and defenders gain access to more compute, and as models become larger, what will be the effect on robustness? We argue that answering this question requires a *scaling lens*, which we adopt in an extensive study of language model robustness across several classification tasks, model families, and adversarial attacks. We find that in the absence of explicit safety training, larger models are not consistently more robust; however, scale improves sample efficiency in adversarial training, though it worsens compute efficiency. Further, we find that increasing attack compute smoothly improves attack success rate against both undefended and adversarially trained models. Finally, after exploring robustness transfer across attacks and threat models, we combine attack and defense scaling rates to study the offense-defense balance. We find that while attack scaling outpaces adversarial training across all models studied, larger adversarially trained models might give defense the advantage in the long run. These results underscore the utility of the scaling lens, and provide a paradigm for evaluating future attacks and defenses on frontier models. Code for this project is available at https://github.com/AlignmentResearch/scaling-llm-robustness-paper.
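As a rough illustration of what a "scaling lens" can look like in practice, the sketch below fits a simple log-linear trend of attack success rate against attack compute on synthetic numbers and extrapolates to a target success rate. The data points, the logit-linear functional form, and the extrapolation are illustrative assumptions, not the paper's actual data, fitting procedure, or results.

```python
# Hypothetical sketch: fitting a log-linear scaling trend of attack success
# rate (ASR) vs. attack compute, in the spirit of a "scaling lens". All
# numbers below are synthetic and for illustration only.
import numpy as np

# Synthetic measurements: attack compute (FLOPs, arbitrary scale) and the
# attack success rate observed at each compute budget.
attack_compute = np.array([1e12, 1e13, 1e14, 1e15, 1e16])
attack_asr = np.array([0.05, 0.12, 0.27, 0.45, 0.68])

# Fit ASR in logit space as a linear function of log10(compute), i.e.
# logit(ASR) = a * log10(C) + b -- one simple way to summarize a smooth
# improvement of attack success with increasing compute.
def logit(p):
    return np.log(p / (1 - p))

a, b = np.polyfit(np.log10(attack_compute), logit(attack_asr), deg=1)

# Extrapolate: compute needed to reach a target ASR under this fitted trend.
target_asr = 0.9
needed_log10_compute = (logit(target_asr) - b) / a
print(f"fitted slope (logit ASR per decade of compute): {a:.2f}")
print(f"extrapolated compute for {target_asr:.0%} ASR: 1e{needed_log10_compute:.1f} FLOPs")
```

Fitting an analogous trend for defense (e.g., robustness gained per unit of adversarial training compute) and comparing the two slopes is one way to reason about the offense-defense balance the abstract describes; the specific functional forms used in the paper may differ.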
Lay Summary: Previous research has shown that one can reliably improve the performance of LLMs like ChatGPT and Claude by using a larger underlying model, training on larger datasets, and training for longer. Despite this recipe for success---which has led to an explosion of capabilities in frontier models---even the best models can be adversarially attacked, that is, tricked into doing things they shouldn't, like providing misinformation or advising on how to build weapons. We wanted to find a way to predict whether this vulnerability to adversarial attack will still exist in the future, when both defender and attacker have access to more compute (so the defender can train larger models for longer, but the attacker can also attack harder). In this work, we lay the groundwork for such an approach, and showcase it by studying six tasks and three attacks against model families ranging from 7 million to 14 billion parameters (about 28 MB to 56 GB in size). We find that, in general, using a larger model does not automatically make the model robust. However, adversarial training, that is, training the model on examples of the attack, does reliably improve robustness. On the attack side of the equation, increasing attack strength reliably improves attack success rate, regardless of whether the model being attacked has undergone safety training. Putting the results together, we show that, for all model sizes studied, the attacker has the advantage. However, for significantly larger models, the trend suggests that the defender might ultimately have the advantage.
Link To Code: https://github.com/AlignmentResearch/scaling-llm-robustness-paper
Primary Area: Social Aspects->Safety
Keywords: ai safety, language models, scaling laws, adversarial attacks, adversarial training, robustness
Submission Number: 8317