From Black Boxes to Transparent Minds: Evaluating and Enhancing the Theory of Mind in Multimodal Large Language Models

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: As large language models evolve, there is growing anticipation that they will emulate human-like Theory of Mind (ToM) to assist with routine tasks. However, existing methods for evaluating machine ToM focus primarily on unimodal models and largely treat these models as black boxes, lacking an interpretative exploration of their internal mechanisms. In response, this study adopts an interpretability-driven, mechanism-level approach to assessing ToM in multimodal large language models (MLLMs). Specifically, we first construct a multimodal ToM test dataset, GridToM, which incorporates diverse belief-testing tasks and perceptual information from multiple perspectives. Next, our analysis shows that attention heads in MLLMs can distinguish cognitive information across perspectives, providing evidence of ToM capabilities. Finally, we present a lightweight, training-free approach that significantly enhances the model's exhibited ToM by shifting activations along the directions identified by these attention heads.
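To make the "training-free adjustment along an attention-head direction" concrete, here is a minimal, hypothetical sketch (not the authors' released code; see the linked project page for that). It assumes per-head activations have already been collected from contrastive perspective prompts, e.g. "the agent saw the object being moved" vs. "the agent did not see it"; the function names, the head dimension of 128, and the scaling factor alpha are illustrative assumptions.

    # Hypothetical sketch: difference-of-means steering of one attention head.
    import torch

    def head_direction(acts_pos: torch.Tensor, acts_neg: torch.Tensor) -> torch.Tensor:
        # acts_pos / acts_neg: (num_samples, head_dim) activations of one head
        # under the two perspective conditions.
        d = acts_pos.mean(dim=0) - acts_neg.mean(dim=0)
        return d / d.norm()  # unit-norm steering direction

    def steer(head_output: torch.Tensor, direction: torch.Tensor, alpha: float) -> torch.Tensor:
        # Add a scaled copy of the direction to the head's output at inference time.
        return head_output + alpha * direction

    # Toy usage with random stand-ins for real activations.
    torch.manual_seed(0)
    acts_pos = torch.randn(64, 128) + 0.5   # e.g., "agent saw the move" prompts
    acts_neg = torch.randn(64, 128)         # e.g., "agent did not see the move" prompts
    d = head_direction(acts_pos, acts_neg)

    head_out = torch.randn(1, 128)          # this head's output for a new test prompt
    steered = steer(head_out, d, alpha=3.0)
    print(steered.shape)                    # torch.Size([1, 128])

In a real MLLM this addition would be applied via a forward hook on the selected attention heads during generation; the sketch only illustrates the direction computation and the additive intervention.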
Lay Summary: Artificial-intelligence helpers will be far safer and more useful if they can reason about what different people have seen or know, a skill psychologists call Theory of Mind. Existing computer tests for this skill treat AI models like sealed black boxes, so they miss how modern systems combine language with images. We created GridToM, a new set of puzzles that mix pictures and words and ask models to predict what each observer would believe. Instead of just scoring answers, we also opened the model's "mind": we tracked the internal attention heads that decide where the model looks, and found distinct patterns for each observer's point of view. That tells us the model is genuinely separating perspectives, not just guessing. Finally, we show a simple, training-free tweak (nudging the model along the relevant attention direction) that makes it even better at these social-reasoning tasks. Our approach offers both a sharper yardstick and a clearer window into how future multimodal AI can understand us.
Link To Code: https://annaisavailable.github.io/GridToM/
Primary Area: Social Aspects->Accountability, Transparency, and Interpretability
Keywords: Multimodal Large Language Models (MLLMs), Theory of Mind (ToM), Interpretability, Attention Mechanisms
Submission Number: 8691