clembench: Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Theme Track: Large Language Models and the Future of NLP
Submission Track 2: Dialogue and Interactive Systems
Keywords: large language models; evaluation; dialogue; dialogue games; interaction
TL;DR: An implemented method for evaluating LLMs as conversational agents through self-play of interactive games; and an analysis of results.
Abstract: Recent work has proposed a methodology for the systematic evaluation of "Situated Language Understanding Agents" --- agents that operate in rich linguistic and non-linguistic contexts --- through testing them in carefully constructed interactive settings. Other recent work has argued that Large Language Models (LLMs), if suitably set up, can be understood as (simulators of) such agents. A connection suggests itself, which this paper explores: Can LLMs be evaluated meaningfully by exposing them to constrained game-like settings that are built to challenge specific capabilities? As a proof of concept, this paper investigates five interaction settings, showing that current chat-optimised LLMs are, to an extent, capable of following game-play instructions. Both this capability and the quality of the game play, measured by how well the objectives of the different games are met, follow the development cycle, with newer models generally performing better. The metrics, even for the comparatively simple example games, are far from saturated, suggesting that the proposed instrument will retain diagnostic value.
Submission Number: 3023
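
The evaluation loop the abstract describes --- a programmatic game master relaying turns between players in a constrained game, scoring both whether the rules were followed at all and whether the game objective was met --- can be sketched as below. This is an illustrative sketch only, not the clembench implementation or API; all names (`Player`, `play_game`, `is_valid`, `is_won`) are hypothetical placeholders, and the scripted lambdas stand in for prompted LLM players.

```python
# Illustrative sketch (NOT the clembench API): a game master relays turns
# between two players and scores an episode on two axes -- whether the
# game rules were followed at all, and whether the objective was met.
# All names here (Player, play_game, is_valid, is_won) are hypothetical.

from typing import Callable, Dict, List

# A player maps the dialogue history to its next utterance; in the paper's
# setting this role is filled by a prompted, chat-optimised LLM.
Player = Callable[[List[str]], str]

def play_game(player_a: Player, player_b: Player, max_turns: int,
              is_valid: Callable[[str], bool],
              is_won: Callable[[List[str]], bool]) -> Dict[str, object]:
    """Run one self-play episode under a programmatic game master."""
    history: List[str] = []
    for turn in range(max_turns):
        speaker = player_a if turn % 2 == 0 else player_b
        move = speaker(history)
        if not is_valid(move):
            # Instruction-following failure: the episode is aborted,
            # which feeds the "played the game at all?" measure.
            return {"aborted": True, "turns": turn, "won": False}
        history.append(move)
        if is_won(history):
            # Objective met: feeds the "played the game well?" measure.
            return {"aborted": False, "turns": turn + 1, "won": True}
    return {"aborted": False, "turns": max_turns, "won": False}

# Toy usage with scripted stand-ins for LLM players: a trivial word game
# in which player A clues and player B eventually guesses correctly.
if __name__ == "__main__":
    cluer: Player = lambda h: "CLUE: something you read"
    guesser: Player = lambda h: "GUESS: book" if len(h) >= 3 else "GUESS: lamp"
    result = play_game(
        cluer, guesser, max_turns=10,
        is_valid=lambda m: m.startswith(("CLUE:", "GUESS:")),
        is_won=lambda h: h[-1] == "GUESS: book",
    )
    print(result)  # {'aborted': False, 'turns': 4, 'won': True}
```

Separating the validity check from the win check mirrors the abstract's two findings: models can fail by not following game-play instructions at all (aborted episodes) or by playing validly but poorly (unmet objectives), and the two can be reported as distinct metrics.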