Keywords: factuality, hallucinations, benchmarking, interpretability
Abstract: Large language models (LLMs) can correctly answer "When was Einstein born?" yet fail to provide the same date when the question is embedded in a long-form query requesting multiple facts about Einstein's life, revealing a fundamental inconsistency in how models access factual knowledge across task complexities. While models achieve impressive accuracy on factual question-answering benchmarks, this reliability gap between simple and complex long-form queries remains poorly understood and erodes trust in model outputs. In this work, we introduce Short-Long Form Alignment for Factual Question Answering (SLAQ), a controlled evaluation framework that compares LLMs' answers to the same factual questions asked (a) in isolation (short) vs. (b) integrated into complex queries (long). Evaluating 16 LLMs on 600 queries, we find systematic misalignment between answers to corresponding short and long queries. We further uncover momentum effects, where consecutive correct or incorrect answers create self-reinforcing patterns. Through mechanistic analysis, we find that aligned facts activate overlapping model internals, and that metrics based on mechanistic similarity can predict short-long answer alignment with up to 80% accuracy. Our work establishes factual consistency over query complexity as an important aspect of LLMs' trustworthiness and challenges current evaluation practices, which implicitly assume that good performance on simple factual queries implies reliability in more complex knowledge-seeking tasks as well.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: factuality, benchmarking, interpretability
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Submission Number: 5274