Track: tiny / short paper (up to 5 pages)
Keywords: AI safety, alignment, continuous thought, latent reasoning, chain-of-thought, interpretability, linear probes, backdoors, deceptive alignment, misalignment detection
TL;DR: We show continuous thought models can harbor misaligned reasoning in latent space while producing aligned outputs, and that linear probes targeting early latent tokens can detect these hidden armed states before harmful behavior is expressed.
Abstract: Chain-of-Thought (CoT) reasoning has emerged as a key technique for eliciting complex reasoning in Large Language Models (LLMs). Although CoT is interpretable, its dependence on natural language limits the model's expressive bandwidth. Continuous thought models address this bottleneck by reasoning in latent space rather than in human-readable tokens. While they enable richer representations and faster inference, they raise a critical safety question: how can we detect misaligned reasoning in an uninterpretable latent space? To study this, we introduce MoralChain, a benchmark of 12,000 social scenarios with parallel moral/immoral reasoning paths. We train a continuous thought model with backdoor behavior using a novel dual-trigger paradigm: one trigger that arms misaligned latent reasoning ($\texttt{[T]}$) and another that releases harmful outputs ($\texttt{[O]}$). We demonstrate three findings: (1) continuous thought models can exhibit misaligned latent reasoning while producing aligned outputs, with aligned and misaligned reasoning occupying geometrically distinct regions of latent space; (2) linear probes trained on behaviorally distinguishable conditions ($\texttt{[T][O]}$ vs $\texttt{[O]}$) transfer to detecting armed-but-benign states ($\texttt{[T]}$ vs baseline) with high accuracy; and (3) misalignment is encoded in early latent thinking tokens, suggesting that safety monitoring for continuous thought models should target the "planning" phase of latent reasoning.
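The probe-transfer setup described in the abstract can be sketched in a few lines. This is a minimal illustration with synthetic data, not the paper's implementation: the latent vectors, dimensions, and shift magnitudes below are all assumptions standing in for the model's early continuous-thought activations under the $\texttt{[T][O]}$, $\texttt{[O]}$, and $\texttt{[T]}$ conditions.

```python
# Hypothetical sketch of the probe-transfer experiment.
# Latent states are simulated as Gaussians; "misaligned" states are shifted
# along a fixed direction, mimicking geometrically distinct latent regions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n = 64, 500  # latent dimension and examples per condition (assumed)

direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

# Training conditions: [O]-only (benign) vs [T][O] (behaviorally misaligned).
benign_train = rng.normal(size=(n, d))
armed_train = rng.normal(size=(n, d)) + 3.0 * direction

X = np.vstack([benign_train, armed_train])
y = np.concatenate([np.zeros(n), np.ones(n)])
probe = LogisticRegression(max_iter=1000).fit(X, y)

# Transfer test: [T]-only (armed-but-benign) states, simulated as a weaker
# shift along the same direction, vs a fresh baseline sample.
baseline = rng.normal(size=(n, d))
armed_only = rng.normal(size=(n, d)) + 2.5 * direction
acc = probe.score(
    np.vstack([baseline, armed_only]),
    np.concatenate([np.zeros(n), np.ones(n)]),
)
print(f"transfer accuracy: {acc:.2f}")
```

If the armed-but-benign states lie along the same latent direction as the behaviorally misaligned ones, a probe fit only on the behaviorally distinguishable conditions generalizes to them, which is the transfer effect finding (2) reports.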
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 53