Hi-ToM: A Benchmark for Evaluating Higher-Order Theory of Mind Reasoning in Large Language Models

Published: 20 Jun 2023, Last Modified: 29 Jun 2023, ToM 2023
Keywords: Higher Order Theory of Mind, Chain of Thought Prompting, Large Language Models
TL;DR: We introduce Hi-ToM, a benchmark for evaluating higher-order ToM reasoning in LLMs, and experimentally evaluate GPT-4 on it.
Abstract: Theory of Mind (ToM) is the ability to understand and reason about one's own and others' mental states, and it plays a critical role in the development of intelligence, language understanding, and cognitive processes. While existing work has primarily focused on first- and second-order ToM, we explore higher-order ToM, which involves recursive reasoning about others' beliefs. We introduce Hi-ToM, a Higher-Order Theory of Mind benchmark. Our experimental evaluation with GPT-4 reveals a decline in performance on higher-order ToM tasks, indicating the limitations of current models. This highlights the challenges of reasoning in complex ToM scenarios and underscores the need for further advances in the higher-order ToM capabilities of large language models.
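For illustration only, the sketch below shows one way nested (nth-order) belief questions of the kind higher-order ToM probes can be composed programmatically; the agent names, the object ("apple"), and the helper function are hypothetical placeholders and are not drawn from the benchmark itself.

```python
# Minimal illustrative sketch (not taken from the Hi-ToM benchmark):
# composing nth-order belief questions, where each added agent nests
# one more level of "X thinks that Y thinks that ...".

def belief_question(agents: list[str], obj: str) -> str:
    """Build an nth-order belief question, where n = len(agents).

    e.g. ["Anne", "Bob"] -> "Where does Anne think Bob thinks the apple is?"
    """
    first, *rest = agents  # outermost believer, then the nested believers
    nested = "".join(f"{a} thinks " for a in rest)
    return f"Where does {first} think {nested}the {obj} is?"


if __name__ == "__main__":
    agents = ["Anne", "Bob", "Carol", "Dana"]
    # Order 1 is an ordinary belief question; each additional agent adds
    # one level of recursion over the previous agent's belief.
    for order in range(1, len(agents) + 1):
        print(f"Order {order}: {belief_question(agents[:order], 'apple')}")
```

Running the sketch prints questions of increasing order, e.g. "Where does Anne think Bob thinks Carol thinks the apple is?" for order 3, which conveys the recursive structure the abstract refers to.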
Supplementary Material: pdf
Submission Number: 31