AttentionBreaker: Adaptive Evolutionary Optimization for Unmasking Vulnerabilities in LLMs through Bit-Flip Attacks
Abstract: Large language models (LLMs) have significantly advanced natural language processing (NLP) yet are still susceptible to hardware-based threats, particularly bit-flip attacks (BFAs). Traditional BFA techniques, requiring iterative gradient recalculations after each bit-flip, become computationally prohibitive and lead to memory exhaustion as model size grows, making them impractical for state-of-the-art LLMs. To overcome these limitations, we propose AttentionBreaker, a novel framework for efficient parameter space exploration, incorporating GenBFA, an evolutionary optimization method that identifies the most vulnerable bits in LLMs. Our approach demonstrates unprecedented efficacy—flipping just three bits in the LLaMA3-8B-Instruct model, quantized to 8-bit weights (W8), completely collapses performance, reducing Massive Multitask Language Understanding (MMLU) accuracy from 67.3% to 0% and increasing Wikitext perplexity by a factor of $10^5$. Furthermore, AttentionBreaker circumvents existing defenses against BFAs on transformer-based architectures, exposing a critical security risk. Code is open sourced at: https://anonymous.4open.science/r/attention_breaker-16FF/.
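To make the threat model concrete, here is a minimal sketch (hypothetical, not the paper's released code) of what a single bit-flip does to one 8-bit (W8) quantized weight: flipping the most significant bit of a two's-complement int8 value changes both its sign and magnitude, which is why so few flips can be catastrophic.

```python
def flip_bit(w8: int, bit: int) -> int:
    """Flip one bit of a signed 8-bit weight (two's-complement view)."""
    u = w8 & 0xFF                       # reinterpret as an unsigned byte
    u ^= 1 << bit                       # flip the chosen bit position
    return u - 256 if u >= 128 else u   # convert back to signed int8

w = 3                          # a small positive quantized weight
w_attacked = flip_bit(w, 7)    # flip the most significant (sign) bit
print(w, "->", w_attacked)     # 3 -> -125
```

A single flip in bit position 7 moves the weight from 3 to -125; an attack like the one described above only needs to locate the handful of weights for which such a perturbation destroys model behavior.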
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Chao_Chen1
Submission Number: 5228