Keywords: LLM, Game Theory
TL;DR: We apply game-theoretic techniques to improve the worst-case performance of LLMs' information-seeking abilities.
Abstract: Large Language Models (LLMs) are increasingly deployed in real-world settings where key task information is missing, making active information seeking essential. Many existing approaches impose simplifying assumptions that can degrade worst-case performance, which is problematic in high-stakes applications.
In this work, we use the game of Twenty Questions to evaluate the information-seeking ability of LLMs. We introduce and formalize its adversarial counterpart, the Strategic Language Search (SLS) problem, along with its variants, as a two-player zero-sum extensive-form game. We propose Game of Thought (GoT), a framework that applies game-theoretic techniques to approximate a Nash equilibrium (NE) strategy for the restricted variant of the game. Across all evaluated settings, GoT consistently improves worst-case performance over pure prompting and heuristic-guided search baselines.
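To make the game-theoretic framing concrete, the sketch below approximates an NE of a toy two-player zero-sum matrix game via fictitious play. This is purely illustrative: it is not the paper's GoT algorithm, and the game, function names, and iteration count are assumptions for the example; SLS itself is an extensive-form game over natural-language questions, not a small matrix game.

```python
# Hypothetical illustration (not the paper's GoT method): fictitious play
# approximating a Nash equilibrium of a small two-player zero-sum matrix game.
def fictitious_play(payoff, iters=20000):
    """Approximate NE mixed strategies for a zero-sum game, given the
    row (maximizing) player's payoff matrix as a list of lists."""
    m, n = len(payoff), len(payoff[0])
    row_counts = [0] * m  # empirical play counts of the row player
    col_counts = [0] * n  # empirical play counts of the column player
    row_counts[0] += 1
    col_counts[0] += 1
    for _ in range(iters):
        # Row player best-responds to the column player's empirical mix.
        row_vals = [sum(payoff[i][j] * col_counts[j] for j in range(n))
                    for i in range(m)]
        row_counts[max(range(m), key=lambda i: row_vals[i])] += 1
        # Column player (minimizer) best-responds to the row player's mix.
        col_vals = [sum(payoff[i][j] * row_counts[i] for i in range(m))
                    for j in range(n)]
        col_counts[min(range(n), key=lambda j: col_vals[j])] += 1
    total_r, total_c = sum(row_counts), sum(col_counts)
    return ([c / total_r for c in row_counts],
            [c / total_c for c in col_counts])

# Matching pennies: the unique NE mixes uniformly over both actions.
payoff = [[1, -1], [-1, 1]]
row_mix, col_mix = fictitious_play(payoff)
```

Fictitious play's empirical frequencies are known to converge to an NE in two-player zero-sum games, which is why it serves here as a minimal stand-in for the heavier equilibrium-approximation machinery an extensive-form game requires.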
Track: Long Paper
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 69