Transformers as Game Players: Provable In-context Game-playing Capabilities of Pre-trained Models

Published: 25 Sept 2024 · Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: In-context Learning; Multi-agent Competitive Games; Transformers; Decision-making
TL;DR: This work provides a theoretical understanding of the in-context game-playing capabilities of pre-trained transformers, broadening the scope of in-context RL from the single-agent setting to multi-agent competitive games.
Abstract: The in-context learning (ICL) capability of pre-trained models based on the transformer architecture has received growing interest in recent years. While theoretical understanding has been obtained for ICL in reinforcement learning (RL), previous results are largely confined to the single-agent setting. This work further explores the in-context learning capabilities of pre-trained transformer models in competitive multi-agent games, i.e., in-context game-playing (ICGP). Focusing on classical two-player zero-sum games, theoretical guarantees are provided demonstrating that pre-trained transformers can provably learn to approximate the Nash equilibrium in an in-context manner, in both decentralized and centralized learning settings. As a key part of the proof, constructive results are established showing that the transformer architecture is sufficiently rich to realize celebrated multi-agent game-playing algorithms, in particular decentralized V-learning and centralized VI-ULCB.
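To make the approximation target concrete, below is a minimal sketch (not from the paper) of how an approximate Nash equilibrium of a two-player zero-sum matrix game emerges from decentralized no-regret self-play with exponential weights (Hedge). This mirrors, in simplified full-information form, the bandit-style policy updates at the heart of decentralized V-learning; the payoff matrix, horizon `T`, and step size `eta` are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Row player maximizes A[i, j]; column player minimizes it.
A = np.array([[ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0]])   # rock-paper-scissors payoffs (illustrative)

T = 20_000
eta = np.sqrt(np.log(A.shape[0]) / T)   # standard Hedge step size

w_row = np.zeros(A.shape[0])    # cumulative scaled payoffs per row action
w_col = np.zeros(A.shape[1])
avg_row = np.zeros(A.shape[0])  # time-averaged policies -> approximate NE
avg_col = np.zeros(A.shape[1])

for _ in range(T):
    p = np.exp(w_row - w_row.max()); p /= p.sum()   # row player's mixed strategy
    q = np.exp(w_col - w_col.max()); q /= q.sum()   # column player's mixed strategy
    avg_row += p / T
    avg_col += q / T
    # Each player updates independently against the opponent's current strategy.
    w_row += eta * (A @ q)      # row player ascends its expected payoffs
    w_col -= eta * (A.T @ p)    # column player descends (zero-sum)

# The duality gap certifies approximation quality; it is 0 at an exact Nash equilibrium.
gap = (A @ avg_col).max() - (avg_row @ A).min()
print("approximate NE (row):", np.round(avg_row, 3))
print("approximate NE (col):", np.round(avg_col, 3))
print(f"duality gap: {gap:.4f}")
```

For rock-paper-scissors the averaged strategies converge toward the uniform equilibrium (1/3, 1/3, 1/3), with the duality gap shrinking at roughly the O(sqrt(log n / T)) rate typical of no-regret self-play.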
Primary Area: Reinforcement learning
Submission Number: 13525