SafeConstellations: Mitigating Over-Refusals in LLMs Through Task-Aware Representation Steering

ACL ARR 2026 January Submission6818 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: large language models, over-refusal, safety alignment, representation steering, interpretability, mechanistic interpretability, task conditioning, inference-time control, AI Alignment
Abstract: LLMs increasingly exhibit over-refusal behavior, where safety mechanisms cause models to reject benign instructions that superficially resemble harmful content. This phenomenon diminishes utility in production applications that rely on common prompt templates or routinely apply LLMs to specific tasks (e.g., sentiment analysis, language translation). Through extensive evaluation, we demonstrate that LLMs persist in refusing inputs containing harmful content even when that content is reframed within tasks that have benign intent. Our mechanistic analysis reveals that LLMs follow distinct "constellation" patterns in embedding space as representations traverse layers, with each NLP task maintaining consistent trajectories that shift predictably between refusal and non-refusal cases. We introduce \textit{SafeConstellations}, an inference-time trajectory-shifting approach that tracks task-specific trajectory patterns and guides representations toward non-refusal pathways. By selectively guiding model behavior only on tasks prone to over-refusal, our method reduces over-refusal rates by up to 73\% with minimal impact on utility—offering a principled and conditional approach to mitigating over-refusals.
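The abstract's core mechanism — shifting a layer's hidden state along a direction pointing from a task's refusal trajectory toward its non-refusal trajectory — can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the function names (`steering_vectors`, `shift_hidden`), the use of per-layer mean hidden states as the "constellation" anchors, and the scalar strength `alpha` are all assumptions for the sake of the example.

```python
import numpy as np

def steering_vectors(refusal_means, nonrefusal_means):
    """Per-layer directions pointing from the refusal trajectory
    toward the non-refusal trajectory (hypothetical construction)."""
    return {l: nonrefusal_means[l] - refusal_means[l] for l in refusal_means}

def shift_hidden(hidden, layer, vectors, alpha=1.0):
    """Add the task-specific steering direction to one layer's hidden state,
    scaled by a strength parameter alpha (assumed here, not from the paper)."""
    return hidden + alpha * vectors[layer]

# Toy example: 2 layers, 4-dimensional hidden states.
rng = np.random.default_rng(0)
refusal = {l: rng.normal(size=4) for l in range(2)}
# Pretend the non-refusal trajectory is offset by an all-ones vector per layer.
nonrefusal = {l: refusal[l] + np.ones(4) for l in range(2)}

vecs = steering_vectors(refusal, nonrefusal)
h_shifted = shift_hidden(refusal[0], layer=0, vectors=vecs, alpha=1.0)
# With alpha=1 the shifted state lands exactly on the non-refusal mean here.
assert np.allclose(h_shifted, nonrefusal[0])
```

In a real model this adjustment would be applied inside the forward pass (e.g., via layer hooks), and only when the input is classified as belonging to a task prone to over-refusal, matching the paper's conditional, task-aware framing.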
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: over-refusal, safety alignment, interpretability, representation steering, mechanistic analysis, alignment control
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Submission Number: 6818