TL;DR: By analyzing the behavior of various LLMs across different temperatures and tasks, we propose an entropy-based algorithm for automated temperature optimization in multi-sample aggregation strategies, eliminating the need for labeled validation data.
Abstract: Multi-sample aggregation strategies, such as majority voting and best-of-N sampling, are widely used with contemporary large language models (LLMs) to enhance predictive accuracy across various tasks. A key challenge in this process is temperature selection, which significantly impacts model performance. Existing approaches either rely on a fixed default temperature or require labeled validation data for tuning, which is often scarce and difficult to obtain. This paper addresses the challenge of automatically identifying the (near-)optimal temperature for different LLMs under multi-sample aggregation strategies, without relying on task-specific validation data. We provide a comprehensive analysis of temperature's role in performance optimization, considering variations in model architectures, datasets, task types, model sizes, and predictive accuracy. Furthermore, we propose a novel entropy-based metric for automated temperature optimization, which consistently outperforms fixed-temperature baselines. Additionally, we incorporate a stochastic process model to enhance interpretability, offering deeper insights into the relationship between temperature and model performance.
Lay Summary: This paper introduces a method for automatically selecting an appropriate temperature for different models and tasks under multi-sample aggregation strategies like majority voting and best-of-N, without requiring any labeled data. We find that entropy serves as a strong signal for temperature selection and propose an algorithm, TURN, based on this insight.
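To give a concrete picture of the idea, the sketch below shows one way an unsupervised entropy signal could drive temperature selection: sample several answers per prompt at each candidate temperature, measure the entropy of the resulting answer distribution, and apply an entropy-based selection rule. This is a minimal illustration, not the paper's actual TURN procedure (see the repository linked below for that); the sample_answer callable, the candidate temperature grid, and the entropy_budget threshold are all illustrative assumptions.

import math
from collections import Counter
from typing import Callable, Sequence

def answer_entropy(answers: Sequence[str]) -> float:
    """Shannon entropy (in nats) of the empirical distribution over final answers."""
    counts = Counter(answers)
    total = len(answers)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def select_temperature(
    sample_answer: Callable[[str, float], str],   # hypothetical: returns one sampled final answer
    prompts: Sequence[str],
    temperatures: Sequence[float] = (0.2, 0.4, 0.6, 0.8, 1.0, 1.2),
    samples_per_prompt: int = 8,
    entropy_budget: float = 1.0,                  # illustrative threshold, not a value from the paper
) -> float:
    """Return the largest candidate temperature whose mean answer entropy stays within
    the budget, i.e. the most diverse setting that still yields an aggregatable answer set."""
    chosen = min(temperatures)
    for t in sorted(temperatures):
        mean_entropy = sum(
            answer_entropy([sample_answer(p, t) for _ in range(samples_per_prompt)])
            for p in prompts
        ) / len(prompts)
        if mean_entropy <= entropy_budget:
            chosen = t
    return chosen

The intuition behind this stand-in rule: keeping answer entropy low favors temperatures at which repeated samples still agree often enough for majority voting to help, while letting it grow toward the budget preserves the diversity that best-of-N sampling benefits from. No labels are needed because entropy is computed from the model's own outputs.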
Link To Code: https://github.com/StigLidu/TURN
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, Inference Time Compute
Submission Number: 8637