Evaluation of Large Language Models via Coupled Token Generation
Abstract: State-of-the-art large language models rely on randomization to respond to a prompt. Consequently, a model may respond differently to the same prompt when asked multiple times. In this work, we argue that the evaluation and ranking of large language models should control for this randomization. Our starting point is the development of a causal model for coupled autoregressive generation, which allows different large language models to sample responses with the same source of randomness. Building upon our causal model, we first show that, on evaluations based on benchmark datasets, coupled autoregressive generation leads to the same conclusions as vanilla autoregressive generation but using provably fewer samples. However, we further show that, on evaluations based on pairwise comparisons, the two approaches can, surprisingly, lead to different rankings when comparing more than two models. This suggests that the apparent advantage of a model over others in existing evaluation protocols may not be genuine, but rather confounded by the randomness inherent to the generation process. To complement our theoretical results, we conduct experiments with several models from the Llama, Mistral and Qwen families. We find that, across multiple benchmark datasets, coupled autoregressive generation requires up to 75% fewer samples to reach the same conclusions as vanilla autoregressive generation. Further, we find that the win rates derived from pairwise comparisons made by a strong large language model between responses to prompts from the LMSYS Chatbot Arena platform differ under coupled and vanilla autoregressive generation.
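The abstract does not spell out how the shared source of randomness enters the sampling step, so the following is a minimal sketch of one standard way to couple autoregressive generation: at each step, all models receive the same Gumbel noise and pick the next token via the Gumbel-max trick, while each model conditions on its own previously generated tokens. The toy "models" (`make_toy_model`) and the vocabulary are hypothetical stand-ins, not the paper's setup, and the paper's causal model may differ in its details.

```python
import numpy as np

VOCAB = ["the", "a", "cat", "dog", "sat", "."]


def make_toy_model(offset):
    """Return a hypothetical next-token-logits function standing in for a real LLM."""
    def logits(prefix):
        # Deterministic toy logits that depend only on the prefix length.
        rng = np.random.default_rng(offset + len(prefix))
        return rng.normal(size=len(VOCAB))
    return logits


def coupled_generate(logit_fns, num_steps, seed=0):
    """Generate one sequence per model, reusing the SAME noise at every step.

    Sampling a token as argmax(logits + Gumbel noise) is equivalent to sampling
    from softmax(logits); sharing the Gumbel draw across models couples their
    generations on a common source of randomness.
    """
    rng = np.random.default_rng(seed)
    sequences = [[] for _ in logit_fns]
    for _ in range(num_steps):
        # One Gumbel draw shared by all models at this generation step.
        gumbel = rng.gumbel(size=len(VOCAB))
        for seq, logit_fn in zip(sequences, logit_fns):
            # Each model still conditions on its own previously generated tokens.
            token_id = int(np.argmax(logit_fn(seq) + gumbel))
            seq.append(VOCAB[token_id])
    return sequences


if __name__ == "__main__":
    model_a = make_toy_model(offset=0)
    model_b = make_toy_model(offset=1000)
    seq_a, seq_b = coupled_generate([model_a, model_b], num_steps=5)
    print("Model A:", " ".join(seq_a))
    print("Model B:", " ".join(seq_b))
```

Under this kind of coupling, two models that assign similar probabilities to the next token tend to pick the same token, so differences between their outputs are more likely to reflect genuine differences between the models rather than unlucky noise, which is the intuition behind the variance (sample-size) reduction claimed in the abstract.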
Submission Number: 959