A Simple Model of Inference Scaling Laws

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We provide a simple predictive model for the functional behavior of inference with increasing number of attempts.
Abstract: Neural scaling laws have garnered significant interest due to their ability to predict model performance as a function of increasing parameters, data, and compute. In this work, we propose a simple statistical ansatz based on memorization to study scaling laws in the context of inference, specifically how performance improves with multiple inference attempts. We explore the coverage, or pass@k metric, which measures the chance of success over repeated attempts, and provide a motivation for the observed functional form of the inference scaling behavior of the coverage in large language models (LLMs) on reasoning tasks. We then define an "inference loss", which exhibits a power law decay as the number of trials increases, and connect this result with prompting costs. We further test the universality of our construction by conducting experiments on a simple generative model, and find that our predictions are in agreement with the empirical coverage curves in a controlled setting. Our simple framework sets the ground for incorporating inference scaling with other known scaling laws.
Lay Summary: Imagine an artificial intelligence, much like a student, trying to solve a complex problem that requires reasoning, such as a maths or coding task. Sometimes the first try isn't right, but more attempts can lead to success. Our research explores how much better AI models get at tasks like coding or maths simply by making multiple attempts, a concept we call 'inference scaling.' We've developed a simple mathematical idea, based on how well a model has 'memorized' information, to predict this improvement. This model helps us understand the chances of getting a correct answer after several tries (known as 'pass@k'). It shows that performance increases in a predictable way, depending on how many 'easy' versus 'hard' aspects a problem presents to the AI. Our findings match real-world observations with large language models and even simpler systems, offering a new way to think about making AI more effective and efficient without just making the models bigger or training them longer.
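As a concrete illustration of the pass@k coverage metric discussed above, here is a minimal sketch of the commonly used unbiased estimator: given n sampled attempts of which c succeed, it estimates the probability that at least one of k attempts passes. The function name and interface are our own, not from the paper.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n total attempts with c successes.

    pass@k = 1 - C(n - c, k) / C(n, k), i.e. one minus the probability
    that a random subset of k attempts contains no success.
    """
    if n - c < k:
        # Fewer than k failures exist, so every size-k subset
        # must contain at least one success.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 1 success out of 4 attempts, pass@1 is 0.25, while pass@k rises toward 1 as k grows, which is the coverage curve whose functional form the paper models.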
Primary Area: Deep Learning->Theory
Keywords: Scaling Laws, Inference Scaling, Maths, Coding, LLMs.
Submission Number: 1287