Turning the Spell Around: Lightweight Alignment Amplification via Rank-One Safety Injection

15 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Large Language Models, Alignment, Safety, Refusal
TL;DR: This paper introduces ROSI, a lightweight, training-free method that amplifies safety alignment in LLMs. The method can also be used to re-align uncensored LLMs.
Abstract: Safety alignment in Large Language Models (LLMs) often involves mediating internal representations to refuse harmful requests. Recent research has demonstrated that these safety mechanisms can be bypassed by ablating or removing specific representational directions within the model. In this paper, we propose the opposite approach: ***Rank-One Safety Injection (ROSI)***, a white-box method that amplifies a model's safety alignment by permanently steering its activations toward the refusal-mediating subspace. **ROSI** operates as a simple, fine-tuning-free rank-one weight modification applied to all residual stream write matrices. The required safety direction can be computed from a small set of harmful and harmless instruction pairs. We show that **ROSI** consistently increases safety refusal rates, as evaluated by Llama Guard 3, while preserving the utility of the model on standard benchmarks such as MMLU, HellaSwag, and ARC. Furthermore, we show that **ROSI** can also re-align 'uncensored' models by amplifying their own latent safety directions, demonstrating its utility as an effective last-mile safety procedure. Our results suggest that targeted, interpretable weight steering is a cheap and potent mechanism to improve LLM safety, complementing more resource-intensive fine-tuning paradigms.
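As a rough illustration of the mechanism the abstract describes, the sketch below assumes the safety direction is a difference of mean residual-stream activations over harmful versus harmless prompts, and that the rank-one injection takes the form W' = W + α r̂ r̂ᵀ W, which amplifies the component each write matrix contributes along the refusal direction. The function names, the exact update form, and the value of α are illustrative assumptions, not the paper's stated procedure.

```python
import torch


def compute_safety_direction(harmful_acts: torch.Tensor,
                             harmless_acts: torch.Tensor) -> torch.Tensor:
    """Difference-of-means direction between harmful and harmless
    residual-stream activations, each of shape [n_samples, d_model].
    (Assumed construction; the paper may normalize or select layers differently.)"""
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()


def inject_safety_direction(W_out: torch.Tensor,
                            r_hat: torch.Tensor,
                            alpha: float = 0.1) -> torch.Tensor:
    """Rank-one update of a residual-stream write matrix W_out of shape
    [d_model, d_in] (e.g. an attention output or MLP down-projection).
    Assumed form: W' = W + alpha * r r^T W, i.e. every vector written to the
    residual stream gets its component along r scaled up by (1 + alpha)."""
    return W_out + alpha * torch.outer(r_hat, r_hat @ W_out)
```

In this reading, the opposite sign (or projecting the direction out entirely, W' = (I - r̂ r̂ᵀ) W) recovers the ablation-style bypass the abstract contrasts with, which is why the injection can be seen as "turning the spell around."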
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 5826