Detecting (Un)answerability in Large Language Models with Linear Directions

ACL ARR 2025 July Submission401 Authors

27 Jul 2025 (modified: 04 Sept 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Large language models (LLMs) often respond confidently to questions even when they lack the necessary information, leading to hallucinated answers. In this work, we study the problem of (un)answerability detection in extractive question answering (QA), where the model should determine if a passage contains sufficient information to answer a given question. We propose a simple approach that identifies a direction in the model’s activation space that captures unanswerability and uses it for classification. This direction is selected by applying activation additions during inference and measuring their impact on the model’s abstention behavior. We show that projecting hidden activations onto this direction yields a reliable score for (un)answerability classification. Experiments on two open-weight LLMs and four QA benchmarks show that our method effectively detects unanswerable questions and generalizes better across datasets than existing prompt-based and classifier-based approaches. Causal interventions reveal that adding the direction increases abstention, while ablating it suppresses it, further indicating that it captures an unanswerability signal.
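To make the abstract's mechanics concrete, below is a minimal sketch (not the authors' released code) of the three operations it describes: scoring by projecting hidden activations onto a candidate unanswerability direction, activation addition to encourage abstention, and directional ablation to suppress it. It assumes the hidden states and the direction are already available as PyTorch tensors; all function names, the steering coefficient `alpha`, and the toy tensors are illustrative.

```python
import torch


def unanswerability_score(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project hidden activations onto a candidate unanswerability direction.

    hidden:    (batch, d_model) activations at a chosen layer/token position.
    direction: (d_model,) direction presumed to encode unanswerability.
    Returns one signed projection score per example; thresholding it gives
    the (un)answerability classifier.
    """
    d = direction / direction.norm()
    return hidden @ d


def add_direction(hidden: torch.Tensor, direction: torch.Tensor, alpha: float) -> torch.Tensor:
    """Activation addition: push activations along the direction to increase abstention."""
    return hidden + alpha * (direction / direction.norm())


def ablate_direction(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Directional ablation: remove the component of the activations along the direction."""
    d = direction / direction.norm()
    return hidden - (hidden @ d).unsqueeze(-1) * d


if __name__ == "__main__":
    # Toy tensors standing in for real model activations and a learned direction.
    torch.manual_seed(0)
    d_model = 16
    hidden = torch.randn(4, d_model)
    direction = torch.randn(d_model)

    print("projection scores:", unanswerability_score(hidden, direction))
    # After ablation, the remaining component along the direction is ~0.
    residual = unanswerability_score(ablate_direction(hidden, direction), direction)
    print("max residual after ablation:", residual.abs().max().item())
```

In practice the direction would be selected, as the abstract states, by applying such activation additions during inference and keeping the candidate whose addition most increases the model's abstention behavior.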
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Interpretability and Analysis of Models for NLP, Question Answering
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 401