Belief Engine: Bayesian Memory for Configurable Opinion Dynamics in LLM Agents

Published: 03 Mar 2026, Last Modified: 23 Mar 2026 · ICLR 2026 Workshop MemAgents · CC BY 4.0
Keywords: Large language model, AI agents, Bayesian belief updating, memory architectures, opinion dynamics, multi-agent debate, belief stability, judgement layer, computational social science
TL;DR: We introduce Belief Engine, a configurable Bayesian memory layer that externalises and updates LLM agents’ beliefs to prevent drift and produce stable, interpretable opinion dynamics in multi-agent debate.
Abstract: Large Language Model (LLM) agents can debate fluently, but they do not reliably maintain beliefs across long interactions, which makes them difficult to use for opinion-dynamics studies where trajectories must be stable, interpretable, and reproducible. We introduce the Belief Engine, a configurable architecture that externalises an agent's belief state and updates it from extracted arguments. The engine stores adjudicated evidence in memory and updates a bounded stance score using a simple Bayesian log-odds rule with tunable parameters controlling evidence sensitivity, anchoring, and asymmetric weighting. In controlled two-agent debates across topics, we show that the Belief Engine produces stance trajectories that are smoother and more reproducible than those produced by LLM-based ("Agentic") updating, and that its parameters provide monotonic control over persuadability and resistance. By separating what an agent says from how its beliefs are updated, the framework enables traceable and controllable opinion dynamics in LLM-agent simulations.
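The update rule is only summarised on this page. As a rough, illustrative sketch, and not the paper's actual implementation, the Python below shows one way a bounded stance score could be updated in log-odds space with knobs for evidence sensitivity, anchoring, and asymmetric weighting; every name here (`update_stance`, `evidence_llr`, `sensitivity`, `anchor`, `anchor_strength`, `pro_weight`, `con_weight`) is an assumption made for illustration.

```python
import math

def update_stance(stance, evidence_llr, *,
                  sensitivity=1.0,      # assumed knob: scales evidence impact
                  anchor=0.5,           # assumed knob: prior the score is pulled toward
                  anchor_strength=0.1,  # assumed knob: strength of that pull
                  pro_weight=1.0,       # assumed knob: weight on supporting evidence
                  con_weight=1.0):      # assumed knob: weight on opposing evidence
    """One Bayesian log-odds update of a stance score bounded in (0, 1).

    `stance` is treated as a probability; `evidence_llr` is the log-likelihood
    ratio of one adjudicated piece of evidence (positive = supporting).
    """
    logit = lambda p: math.log(p / (1.0 - p))
    # Asymmetric weighting: supporting vs. opposing evidence can count differently.
    weight = pro_weight if evidence_llr >= 0 else con_weight
    # Move in log-odds space, then pull part of the way back toward the anchor.
    updated = logit(stance) + sensitivity * weight * evidence_llr
    updated = (1.0 - anchor_strength) * updated + anchor_strength * logit(anchor)
    # Map back through the logistic function, keeping the score bounded.
    return 1.0 / (1.0 + math.exp(-updated))

# Example: a mildly supportive argument nudges a neutral agent upward.
stance = 0.5
stance = update_stance(stance, evidence_llr=0.8, sensitivity=0.5)
print(round(stance, 3))  # ~0.589
```

Doing the update in log-odds space and mapping back through the logistic function keeps the score strictly inside (0, 1), which matches the abstract's description of a bounded stance score with tunable persuadability and resistance.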
Submission Number: 59