Concept-Level Explainability for Auditing & Steering LLM Responses

ICLR 2026 Conference Submission 21754 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Explainability, Attribution, Auditing, Alignment, Steering, Safety, Concept, Text Generation, Bias, Sentiment Polarization
TL;DR: ConceptX is an attribution-based XAI method for auditing and steering LLMs by identifying input concepts that influence specific output aspects. This work also introduces steering effectiveness as a novel quality metric for XAI.
Abstract: As large language models (LLMs) become widely deployed, concerns about their safety and alignment grow. One approach to steering LLM behavior, such as mitigating biases or defending against jailbreaks, is to identify which parts of a prompt influence specific aspects of the model's output. Token-level attribution methods offer a promising solution, but they still struggle in text generation, as they explain the presence of each output token separately rather than the underlying semantics of the entire LLM response. We introduce ConceptX, a model-agnostic, concept-level explainability method that identifies concepts, i.e., semantically rich tokens in the prompt, and assigns them importance based on the semantic similarity of the resulting outputs. Unlike current token-level methods, ConceptX also preserves context integrity through in-place token replacements and supports flexible explanation goals, e.g., gender bias. ConceptX enables both auditing, by uncovering sources of bias, and steering, by modifying prompts to shift the sentiment or reduce the harmfulness of LLM responses, without requiring retraining. Across three LLMs, ConceptX outperforms token-level methods such as TokenSHAP in both faithfulness and human alignment. In steering tasks, ConceptX boosts sentiment shift by 0.252 versus 0.131 for random edits and lowers attack success rates from 0.463 to 0.242, outperforming attribution and paraphrasing baselines. While prompt engineering and self-explaining methods sometimes yield safer responses, ConceptX offers a transparent and faithful alternative for improving LLM safety and alignment. Beyond demonstrating the practical benefits of attribution-based explainability in guiding LLM behavior, this work introduces steering effectiveness as a novel measure of XAI quality.
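
To illustrate the idea described in the abstract, the following is a minimal, hypothetical sketch of concept-level attribution via in-place replacement: each concept token in the prompt is swapped for a neutral substitute, the response is regenerated, and the concept's importance is the semantic shift of the response. The `generate` callable, the `replacement` string, and the embedding model are illustrative assumptions, not the authors' implementation.

```python
"""Hypothetical sketch of concept-level attribution by in-place replacement."""
from typing import Callable, Dict, List

import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed off-the-shelf sentence embedder for measuring semantic similarity.
_embedder = SentenceTransformer("all-MiniLM-L6-v2")


def _cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def concept_importance(
    prompt: str,
    concepts: List[str],                 # semantically rich tokens in the prompt
    generate: Callable[[str], str],      # any LLM text-generation function (model-agnostic)
    replacement: str = "something",      # neutral in-place substitute (an assumption)
) -> Dict[str, float]:
    """Score each concept by how much the response semantics change without it."""
    base_response = generate(prompt)
    base_emb = _embedder.encode(base_response)

    scores: Dict[str, float] = {}
    for concept in concepts:
        # Replace the concept in place so the rest of the context stays intact.
        perturbed_prompt = prompt.replace(concept, replacement)
        perturbed_emb = _embedder.encode(generate(perturbed_prompt))
        # Importance = drop in semantic similarity of the full response.
        scores[concept] = 1.0 - _cosine(base_emb, perturbed_emb)
    return scores
```

Under this sketch, ranking concepts by score supports auditing (which prompt concepts drive, e.g., biased or harmful content) and steering (editing or replacing the highest-scoring concepts to shift sentiment or reduce harmfulness), without retraining the model.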
Supplementary Material: pdf
Primary Area: interpretability and explainable AI
Submission Number: 21754