Code-Space Response Oracles: Generating Interpretable Multi-Agent Policies with Large Language Models
Keywords: Multi-Agent Reinforcement Learning, Game Theory, Large Language Models, Code Synthesis
TL;DR: Our approach modifies the Policy-Space Response Oracles framework to have an LLM generate code policies, trading opaque deep RL policies for interpretable ones.
Abstract: Recent advances in multi-agent reinforcement learning, particularly Policy-Space Response Oracles (PSRO), have enabled the computation of approximate game-theoretic equilibria in increasingly complex domains. However, these methods are fundamentally limited by their reliance on deep reinforcement learning oracles that produce opaque, 'black-box' neural network policies, making the resulting policies difficult to interpret, trust, and debug. In this work, we introduce Code-Space Response Oracles (CSRO), a novel framework that addresses this challenge by replacing the RL oracle with a Large Language Model (LLM). CSRO reframes best-response computation as a code generation task, prompting an LLM to generate policies directly as human-readable, programmatic strategies. This approach not only yields inherently interpretable policies but also leverages the LLM's pretrained knowledge to discover complex, human-like strategies. We consider multiple ways to construct and enhance an LLM-based oracle: zero-shot prompting, iterative refinement, and FunSearch, a distributed LLM-based evolutionary system. We demonstrate that CSRO achieves performance competitive with baselines while producing a diverse set of explainable policies. Our work presents a new perspective on multi-agent learning, shifting the focus from optimizing opaque policy parameters to synthesizing interpretable, algorithmic behavior.
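As a rough illustration of the framing described in the abstract, the following minimal Python sketch shows a PSRO-style outer loop whose best-response oracle asks an LLM for a code policy. The `llm_complete` stub, the `policy(observation)` interface, and the way the opponent mixture is summarized are assumptions made for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a CSRO-style loop (not the authors' code).
# Assumptions: `llm_complete` stands in for a real LLM API call, and
# policies follow a simple `policy(observation) -> action` interface.

from typing import Callable, List


def llm_complete(prompt: str) -> str:
    """Placeholder for an LLM call; returns Python source for a policy function."""
    # In practice this would query an LLM with the prompt below.
    return (
        "def policy(observation):\n"
        "    # Trivial stand-in strategy: always play action 0.\n"
        "    return 0\n"
    )


def synthesize_best_response(opponent_summary: str) -> Callable:
    """Ask the LLM for a code policy aimed at the current opponent mixture."""
    prompt = (
        "Write a Python function `policy(observation)` that returns an action "
        "and performs well against the following opponent mixture:\n"
        f"{opponent_summary}\n"
    )
    source = llm_complete(prompt)
    namespace: dict = {}
    exec(source, namespace)  # The generated source is itself the interpretable policy.
    return namespace["policy"]


def csro_loop(num_iterations: int) -> List[Callable]:
    """Simplified PSRO-style outer loop with an LLM code-generation oracle."""
    population: List[Callable] = []
    for _ in range(num_iterations):
        # A full implementation would compute a meta-strategy (e.g., a Nash
        # mixture) over `population` and describe it in the prompt.
        opponent_summary = f"{len(population)} previously generated code policies"
        population.append(synthesize_best_response(opponent_summary))
    return population


if __name__ == "__main__":
    policies = csro_loop(num_iterations=3)
    print([p(observation=None) for p in policies])
```

In this sketch the population holds readable source-level policies rather than neural network weights, which is the interpretability trade described in the TL;DR; the zero-shot prompt shown here could be swapped for iterative refinement or a FunSearch-style evolutionary search over generated programs.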
Area: Generative and Agentic AI (GAAI)
Generative AI: I acknowledge that I have read and will follow this policy.
Submission Number: 1298