Efficacy of Language Model Self-Play in Non-Zero-Sum Games

Published: 30 Oct 2024 · Last Modified: 13 Dec 2024 · LanGame Spotlight · CC BY 4.0
Keywords: language models, self-play, multi-agent, dialogue, reasoning
TL;DR: We analyze the performance of language model self-play in competitive and cooperative games.
Abstract: Game-playing agents like AlphaGo have achieved superhuman performance through self-play, which is theoretically guaranteed to yield optimal policies in competitive games. However, most language tasks are partially or fully cooperative, so it is an open question whether techniques like self-play can be used effectively to improve language models. We empirically investigate this question in a negotiation game setting known as Deal or No Deal (DoND). Crucially, the objective in DoND can be modified to produce a fully cooperative game, a strictly competitive one, or anything in between. We finetune language models in self-play over multiple rounds of filtered behavior cloning in DoND for each of these objectives and evaluate them in self-play and in collaboration with humans. We find that language models improve substantially in self-play, achieving 14-17× higher task reward after finetuning. Further, the trained models generalize to both cooperation and competition with humans, scoring 2.5-6× higher than base models. We view these results as an early promising sign for language model self-play in cooperative settings, despite a lack of theoretical guarantees.
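The abstract describes two mechanisms: a DoND objective that can be tuned between full cooperation and strict competition, and self-play training via rounds of filtered behavior cloning. The sketch below is a minimal illustration of both ideas under assumed details, not the authors' implementation: the interpolation parameter `lam`, and the helpers `sample_dialogue` and `finetune_on`, are hypothetical stand-ins.

```python
# A minimal sketch (assumptions, not the paper's code) of:
# (1) an objective interpolating between cooperative and competitive play,
# (2) self-play finetuning via rounds of filtered behavior cloning.

import random
from dataclasses import dataclass


@dataclass
class Dialogue:
    transcript: list[str]
    score_a: float  # points agent A earns from the final deal
    score_b: float  # points agent B earns from the final deal


def objective(own: float, partner: float, lam: float) -> float:
    """Assumed parameterization of the game objective:
    lam = 1  -> fully cooperative (maximize joint score),
    lam = -1 -> strictly competitive (maximize score difference),
    lam = 0  -> semi-competitive (own score only)."""
    return own + lam * partner


def sample_dialogue(model) -> Dialogue:
    """Placeholder: roll out one self-play negotiation with `model`
    playing both sides. Here we just fabricate random outcomes."""
    return Dialogue(["<toy transcript>"], random.random(), random.random())


def finetune_on(model, dialogues: list[Dialogue]):
    """Placeholder for supervised finetuning on the kept transcripts."""
    return model


def filtered_bc_selfplay(model, rounds: int = 10, n: int = 512,
                         keep_frac: float = 0.1, lam: float = 0.0):
    """Repeat: sample self-play dialogues, keep only the top-scoring
    fraction under the chosen objective, and behavior-clone on them."""
    for _ in range(rounds):
        batch = [sample_dialogue(model) for _ in range(n)]
        # Rank dialogues by the objective (both sides share one model,
        # so we score from A's perspective).
        batch.sort(key=lambda d: objective(d.score_a, d.score_b, lam),
                   reverse=True)
        keep = batch[: int(keep_frac * n)]  # filter step
        model = finetune_on(model, keep)    # behavior-cloning step
    return model
```

Varying `lam` across runs would reproduce the paper's three settings (cooperative, semi-competitive, strictly competitive); the exact objective and filtering threshold used by the authors are not specified in the abstract.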
Submission Number: 36