Model Organisms for Emergent Misalignment

Published: 01 Jul 2025, Last Modified: 10 Jul 2025 · ICML 2025 R2-FM Workshop Poster · CC BY 4.0
Keywords: Interpretability, Misalignment, LLMs, Fine-tuning
TL;DR: We present novel datasets that induce coherent emergent misalignment across diverse model families, sizes, and training protocols, and further demonstrate a phase transition in the learning of general misalignment.
Abstract: Recent work discovered Emergent Misalignment (EM): fine-tuning large language models on narrowly harmful datasets can lead them to become broadly misaligned. A survey of experts prior to publication found this result highly unexpected, demonstrating critical gaps in our understanding of model alignment. In this work, we advance understanding of this phenomenon and provide tools for future research. Using new narrowly misaligned datasets, we create improved model organisms that achieve 99% coherence (vs. 67% prior), work with smaller 0.5B-parameter models (vs. 32B), and can induce misalignment using just a single rank-1 LoRA adapter. We demonstrate that EM occurs robustly across diverse model sizes, three model families, and numerous training protocols, including full supervised fine-tuning. Leveraging these cleaner model organisms, we isolate a phase transition that corresponds to learning the directions necessary to induce misalignment. Aligning large language models is critical for frontier AI safety, yet EM exposes how far we are from achieving this robustly. By distilling clean model organisms that isolate a minimal alignment-compromising change, and by identifying where this change is learnt, we establish a foundation for future research into understanding and mitigating alignment risks in LLMs.
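
The abstract's claim that a single rank-1 LoRA adapter suffices to induce emergent misalignment can be made concrete with a short sketch. The snippet below is not the authors' exact protocol; the model name, target module, layer index, and hyperparameters are illustrative assumptions, using the HuggingFace PEFT library to configure the smallest possible low-rank update (r=1) on one projection in one layer.

```python
# Minimal sketch (assumed setup, not the authors' exact protocol):
# attach a single rank-1 LoRA adapter to a small causal LM with HuggingFace PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed 0.5B-scale model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Rank-1 LoRA: the lowest-rank update possible (r=1), restricted here to a
# single projection matrix in a single layer so only one adapter is trained.
lora_config = LoraConfig(
    r=1,
    lora_alpha=1,
    target_modules=["down_proj"],   # assumed target module
    layers_to_transform=[20],       # assumed layer index
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows how few parameters the rank-1 adapter adds

# The adapter would then be trained with a standard supervised fine-tuning loop
# on a narrowly misaligned dataset (omitted here) to study emergent misalignment.
```

Because the adapter adds only a handful of trainable parameters, a change of this size compromising broad alignment illustrates how minimal the alignment-compromising modification isolated by the paper is.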
Submission Number: 133