Keywords: Theory of Mind, Large Language Model
Abstract: Theory of Mind (ToM) assesses whether models can infer hidden mental states such as beliefs, desires, and intentions, a capability essential for natural social interaction. Although recent progress in Large Reasoning Models (LRMs) has boosted step-by-step inference in mathematics and coding, whether this benefit transfers to socio-cognitive skills remains underexplored. We present a systematic study of 11 advanced Large Language Models (LLMs), comparing reasoning models with non-reasoning models on three representative ToM benchmarks. The results show that reasoning models do not consistently outperform their base counterparts and sometimes perform worse. A fine-grained analysis reveals two main failure modes. First, slow-thinking collapse: accuracy drops significantly as responses grow longer, and larger reasoning budgets hurt performance. Second, an option-matching shortcut: when multiple-choice options are removed, reasoning models improve markedly, indicating reliance on option matching rather than genuine deduction. These results highlight that the advances of LRMs in formal reasoning (e.g., math, code) do not transfer to ToM, a representative social reasoning task. We conclude that achieving robust ToM requires capabilities beyond existing reasoning methods, and we provide a preliminary exploration of such an approach that combines Slow-to-Fast (S2F) adaptive reasoning with Think-to-Match (T2M) shortcut prevention.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 16006