GSAE: Graph-Regularized Sparse Autoencoders for Robust LLM Safety Steering

ICLR 2026 Conference Submission 19974 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: AI Safety, Representation Learning, Sparse Autoencoders, Activation Steering, Graph Regularization, Jailbreaking Robustness
TL;DR: We propose Graph-Regularized Sparse Autoencoders (GSAEs), which use Laplacian smoothness on neuron co-activation graphs to learn distributed safety features, achieving higher selective refusal rates, ≥90% refusal under jailbreak attacks, and strong QA accuracy.
Abstract: Large language models (LLMs) face critical safety challenges: they can be manipulated into generating harmful content through adversarial prompts and jailbreak attacks. Existing defenses are typically either black-box guardrails that filter outputs or internals-based methods that steer hidden activations by operationalizing safety as a single latent feature. While effective for simple concepts, this single-feature assumption is limiting: recent evidence shows that abstract concepts such as refusal and temporality are distributed across multiple features rather than isolated in one. To address this limitation, we introduce Graph-Regularized Sparse Autoencoders (GSAEs), which extend SAEs with a Laplacian smoothness penalty on the neuron co-activation graph. Unlike standard SAEs, which assign each concept to a single latent feature, GSAEs recover distributed safety representations as smooth, coherent patterns spanning multiple features. We empirically demonstrate that GSAE enables effective runtime safety steering: it assembles features into a weighted set of safety-relevant directions and controls them with a two-stage gating mechanism that activates interventions only when harmful prompts or continuations are detected during generation. This approach enforces refusals adaptively while preserving utility on benign queries. Across safety and QA benchmarks, GSAE steering achieves an average 82\% selective refusal rate, substantially outperforming standard SAE steering (42\%), while maintaining strong task accuracy (70\% on TriviaQA, 65\% on TruthfulQA, 74\% on GSM8K). Robustness experiments further show generalization across the LLaMA-3, Mistral, Qwen, and Phi model families and resilience against jailbreak attacks (GCG, AutoDAN), with GSAE consistently maintaining $\geq$90\% refusal of harmful content.
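The objective the abstract describes can be pictured as a standard SAE loss (reconstruction + L1 sparsity) plus a Laplacian smoothness term $z^\top L z$ over the latent code. Below is a minimal PyTorch sketch under assumed details: the co-activation graph is rebuilt per batch from latent activations and held fixed for the penalty, and all names (`GSAE`, `coactivation_laplacian`, `lambda_graph`) are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GSAE(nn.Module):
    """Minimal graph-regularized SAE sketch (names are illustrative, not from the paper)."""

    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_latent)
        self.dec = nn.Linear(d_latent, d_model)

    def forward(self, x):
        z = F.relu(self.enc(x))          # non-negative latent code, sparsified by the L1 term
        return self.dec(z), z

def coactivation_laplacian(z: torch.Tensor) -> torch.Tensor:
    """Unnormalized Laplacian L = D - A of a neuron co-activation graph.

    A_ij here is the batch co-activation <z_i, z_j>; this is an assumed
    instantiation, and the paper may construct the graph differently."""
    a = z.T @ z                          # (d_latent, d_latent) co-activation matrix
    a.fill_diagonal_(0.0)                # no self-loops
    d = torch.diag(a.sum(dim=1))         # degree matrix
    return d - a

def gsae_loss(model: GSAE, x: torch.Tensor,
              l1: float = 1e-3, lambda_graph: float = 1e-4) -> torch.Tensor:
    x_hat, z = model(x)
    recon = F.mse_loss(x_hat, x)                 # standard SAE reconstruction term
    sparsity = z.abs().mean()                    # standard L1 sparsity term
    lap = coactivation_laplacian(z.detach())     # graph treated as fixed within the batch
    smooth = torch.einsum('bi,ij,bj->b', z, lap, z).mean()  # z^T L z smoothness penalty
    return recon + l1 * sparsity + lambda_graph * smooth
```

Training then proceeds as with any SAE, e.g. `gsae_loss(model, activations).backward()` on batches of hidden activations; the Laplacian term encourages co-activating latents to take similar values, yielding the smooth, multi-feature safety directions the abstract refers to.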
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 19974