Keywords: theory of mind, large language models, social cognition, distributional information
TL;DR: We compare GPT-3 to human comprehenders across six Theory of Mind tasks; it achieves parity on three but lags behind on the others.
Abstract: We address a growing debate about the extent to which large language models (LLMs) produce behavior consistent with Theory of Mind (ToM) in humans. We present EPITOME: a battery of six experiments that tap diverse ToM capacities, including belief attribution, emotional inference, and pragmatic reasoning. We compare the performance of five LLMs to a baseline of responses from human comprehenders.
Results are mixed. LLMs display considerable sensitivity to mental states and match human performance in several tasks. Yet they commit systematic errors in others, especially those requiring pragmatic reasoning on the basis of mental state information. This uneven performance indicates that attributing ToM to LLMs might be premature.
Submission Number: 30