Bayesian Social Deduction with Graph-Informed Language Models

Published: 16 Oct 2025, Last Modified: 10 Nov 2025 · NeurIPS 2025 ER Workshop · CC BY 4.0
Keywords: Social Deduction, Reasoning Models, Neurosymbolic Reasoning, LLM Agents, Bayesian Inference, Factor Graph, Reasoning Scaling
TL;DR: A hybrid, efficient reasoning framework for social deduction games that outperforms large reasoning models while being faster and requiring fewer tokens
Abstract: Social reasoning---inferring unobservable beliefs and intentions from partial observations of other agents---remains a challenging task for large language models (LLMs). We evaluate the limits of current reasoning language models in the social deduction game Avalon and find that while the largest models demonstrate strong performance, they require extensive test-time inference and degrade sharply when distilled to smaller, real-time-capable variants. We introduce an efficient hybrid reasoning framework that externalizes belief inference to a structured probabilistic model, while using an LLM for language understanding and interaction. Our approach achieves competitive performance with much larger models in Agent-Agent play and, notably, is the first language agent to defeat human players in a controlled study---achieving a 67% win rate and receiving higher qualitative ratings than both reasoning baselines and human teammates. We release code, models, and a dataset to support future work on social reasoning in LLM agents.
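The abstract describes externalizing belief inference to a structured probabilistic model while the LLM handles language. As an illustrative sketch only (not the paper's factor-graph implementation), the core idea of Bayesian belief updating over hidden roles in an Avalon-like game can be shown as follows; the player names, the behavioral parameter `p_fail`, and all function names are assumptions made for this example:

```python
# Minimal sketch of Bayesian belief updating over hidden evil roles in a
# 5-player Avalon-like game. The paper's actual model is a richer factor
# graph; everything here is a simplified illustration.
from itertools import combinations
from math import comb

PLAYERS = ["A", "B", "C", "D", "E"]
NUM_EVIL = 2  # 5-player Avalon has 2 evil players

# Hypothesis space: every possible set of evil players, uniform prior.
hypotheses = list(combinations(PLAYERS, NUM_EVIL))
belief = {h: 1.0 / len(hypotheses) for h in hypotheses}

def likelihood(evil_set, team, fails, p_fail=0.8):
    """P(observed fail-card count | evil_set), assuming each evil team
    member independently plays a fail card with probability p_fail
    (p_fail is an assumed behavioral parameter)."""
    n_evil_on_team = len(set(team) & set(evil_set))
    if fails > n_evil_on_team:
        return 0.0
    return (comb(n_evil_on_team, fails)
            * p_fail**fails
            * (1 - p_fail)**(n_evil_on_team - fails))

def update(belief, team, fails):
    """Posterior over evil sets after observing one quest result (Bayes rule)."""
    post = {h: belief[h] * likelihood(h, team, fails) for h in belief}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Example: a quest with team (A, B) returns one fail card.
belief = update(belief, team=("A", "B"), fails=1)

# Marginal suspicion per player: total posterior mass of evil sets containing them.
suspicion = {p: sum(pr for h, pr in belief.items() if p in h) for p in PLAYERS}
```

After this single observation, hypotheses excluding both A and B get zero posterior mass, and A and B become the most suspected players. The appeal of externalizing this computation, as the abstract argues, is that the update is exact and cheap, while the LLM is reserved for language understanding and interaction.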
Submission Number: 220