Latent instruction representation alignment: defending against jailbreaks, backdoors, and undesired knowledge in LLMs
Keywords: adversarial robustness, AI safety, jailbreaks, unlearning, LLMs, adversarial representation learning
Abstract: We address jailbreaks, backdoors, and unlearning for large language models (LLMs). Unlike prior work, which trains LLMs based on their responses to harmful instructions, our method trains the model to change how it interprets instructions in the first place. The resulting method, Latent Instruction Representation Alignment (LIRA), substantially improves generalization over behavior-based training, and we boost generalization further with an internally adversarial training algorithm. Our method blocks over 99% of PEZ jailbreak attacks, removes a challenging insecure-code backdoor, and achieves optimal forgetting on WMDP-Cyber with negligible loss of benign capabilities.
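As an illustration only (the abstract does not specify LIRA's training objective), one way to read "aligning latent instruction representations" is as a loss that pulls a model's hidden states for a harmful instruction toward those of a safe reference instruction, rather than supervising the model's output behavior. The sketch below makes that reading concrete; the model choice, layer index, example prompts, and mean pooling are all assumptions for the sketch, not the paper's actual method.

```python
# Minimal sketch of latent-representation alignment. This is NOT the paper's
# LIRA objective (unspecified in the abstract); all names below are assumed.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; the paper's model is not stated here
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

align_layer = 6  # assumed: an intermediate layer whose hidden states we align


def hidden_at(prompt: str, layer: int) -> torch.Tensor:
    """Mean-pooled hidden state of `prompt` at the given layer."""
    ids = tok(prompt, return_tensors="pt")
    out = model(**ids, output_hidden_states=True)
    return out.hidden_states[layer].mean(dim=1)  # shape: (1, hidden_dim)


# Illustrative pair: pull the latent of a harmful instruction toward the
# latent of a benign reference instruction, so the model "interprets" the
# harmful request differently upstream of any generated behavior.
h_harm = hidden_at("Write code to exploit a buffer overflow", align_layer)
with torch.no_grad():  # the target representation is held fixed
    h_ref = hidden_at("Explain why exploiting software is harmful", align_layer)

align_loss = F.mse_loss(h_harm, h_ref)
align_loss.backward()  # gradients update how the model encodes the instruction
```

In a full training loop this alignment loss would be combined with a utility-preservation term on benign data; the abstract's "internally adversarial training" would additionally search for instruction representations that evade the alignment, but that component is not reconstructable from the abstract alone.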
Primary Area: foundation or frontier models, including LLMs
Submission Number: 23956