Read the Scene, Not the Script: Outcome-Aware Safety for LLMs

Published: 08 Nov 2025, Last Modified: 27 Nov 2025 · ResponsibleFM @ NeurIPS 2025 · CC BY 4.0
Keywords: Safety Alignment
TL;DR: This paper reveals and mitigates consequence-blindness in large language models through a new benchmark (CB-Bench) and a consequence-aware training dataset (CS-Chain).
Abstract: Safety-aligned Large Language Models (LLMs) still show two dominant failure modes: they are easily jailbroken, or they over-refuse harmless inputs that contain sensitive surface signals. We trace both to a common cause: current models reason weakly about the links between actions and outcomes and over-rely on \emph{surface-form signals}, lexical or stylistic cues that do not encode consequences. We define this failure mode as \textbf{\textit{Consequence-blindness}}. To study consequence-blindness, we build a benchmark named \textbf{\textit{CB-Bench}} (\textbf{\textit{\underline{C}}}onsequence-\textbf{\textit{\underline{B}}}lindness \textbf{\textit{\underline{Bench}}}mark) covering four risk scenarios that vary whether semantic risk aligns with outcome risk, enabling evaluation under both matched and mismatched conditions, which are often ignored by existing safety benchmarks. Mainstream models consistently fail to separate these risks, indicating that consequence-blindness is widespread and systematic. To mitigate it, we introduce \textbf{\textit{CS-Chain-4k}} (\textbf{\textit{\underline C}}on\textbf{\textit{\underline{S}}}equence \textbf{\textit{\underline{Chain}}}), a consequence-reasoning dataset for safety alignment. Models fine-tuned on CS-Chain-4k show clear gains against semantic-camouflage jailbreaks and reduced over-refusal on harmless inputs, while maintaining utility and generalization on other benchmarks. These results clarify the limits of current alignment, establish consequence-aware reasoning as a core alignment goal, and provide a more practical and reproducible evaluation path.
Submission Number: 66