Strategic LLM Decoding through Bayesian Games

Published: 05 Mar 2025, Last Modified: 11 Mar 2025
Workshop on Reasoning and Planning for LLMs @ ICLR 2025
License: CC BY 4.0
Keywords: Multi-agent System, Game Theory, Mechanism Design
TL;DR: Strategic LLM Decoding through Bayesian Games
Abstract:

Large Language Models (LLMs) often produce outputs that, though plausible, lack consistency and reliability, particularly in ambiguous or complex scenarios. A central challenge is ensuring that outputs align with both factual correctness and human intent; existing approaches often trade improved consistency for lower accuracy. To mitigate these challenges, we propose a novel game-theoretic approach to enhance consistency and reliability during the decoding stage of LLM output generation. Our method models the decoding process as a multistage Bayesian Decoding Game. The strategic decoding process dynamically converges to a consensus on the most reliable outputs without human feedback or additional training. Remarkably, our game design allows smaller models to outperform much larger models through game mechanisms alone (e.g., 78.1 for LLaMA-13B vs. 76.6 for PaLM-540B), and it can integrate various LLM strategies and models, demonstrating the potential of game-theoretic tools to improve the truthfulness and reliability of LLMs.
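The abstract does not spell out the game's payoffs or update rule, but the general shape of equilibrium-based decoding can be sketched as a two-player consensus game: a generator policy and a discriminator policy over a shared set of candidate answers are iteratively updated toward agreement, each regularized toward its own model's raw scores. The sketch below is a toy illustration under these assumptions; `consensus_decode`, its payoff structure, and all parameters are hypothetical and not taken from the paper.

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]


def consensus_decode(candidates, gen_scores, disc_scores, rounds=100, lr=0.1):
    """Toy equilibrium-search decoding (hypothetical sketch, not the
    paper's actual Bayesian Decoding Game).

    gen_scores[i]  : generator log-score for candidate i
    disc_scores[i] : discriminator log-score that candidate i is correct

    Each round, every player's policy is nudged toward the other
    player's current policy while staying regularized toward its own
    initial scores, so the two policies converge to a consensus.
    """
    n = len(candidates)
    p_gen = softmax(gen_scores)
    p_disc = softmax(disc_scores)

    for _ in range(rounds):
        # Payoff rewards agreeing with the other player; the raw model
        # scores act as a regularizer anchoring each policy.
        new_gen = softmax([gen_scores[i] + lr * p_disc[i] for i in range(n)])
        new_disc = softmax([disc_scores[i] + lr * p_gen[i] for i in range(n)])
        p_gen, p_disc = new_gen, new_disc

    # Decode the candidate both players jointly assign the most mass.
    best = max(range(n), key=lambda i: p_gen[i] + p_disc[i])
    return candidates[best]
```

In this toy setup, a confident discriminator can override a mildly confident generator, which mirrors the abstract's claim that the game mechanism, rather than raw model scale, drives the final selection.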

Submission Number: 177
