Predictive Scheduling for Efficient Inference-Time Reasoning in Large Language Models

Published: 11 Jun 2025, Last Modified: 10 Jul 2025, ES-FoMo III, CC BY 4.0
Keywords: Inference Optimization, Adaptive Computation, LLM Reasoning, CoT
TL;DR: We introduce Predictive Scheduling, a plug-and-play framework that uses LLM hidden states or lightweight classifiers to estimate per-query reasoning needs and adaptively allocate token budgets at inference time.
Abstract: Large language models (LLMs) achieve state-of-the-art accuracy on complex reasoning tasks by generating multiple chain-of-thought (CoT) traces, but a fixed token budget per query leads to over-computation on easy inputs and under-computation on hard ones. We introduce Predictive Scheduling, a plug-and-play framework that runs lightweight predictors (an MLP on intermediate transformer hidden states or a LoRA-fine-tuned classifier on the raw question text) to estimate each query's optimal reasoning length or difficulty before any full generation. Our greedy batch allocator then distributes a fixed total token budget across queries to maximize expected accuracy. On the GSM8K arithmetic benchmark, Predictive Scheduling yields up to 7.9 percentage points of absolute accuracy gain over uniform budgeting at identical token cost, closing over 50% of the gap to an oracle with perfect foresight. A systematic layer-wise study shows that the transformer's middle layers (12–17) carry the richest signal for reasoning-length estimation. These results demonstrate that pre-generation budget prediction enables fine-grained control of the compute–accuracy trade-off, offering a concrete path toward latency-sensitive, cost-efficient LLM deployments.
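To illustrate the batch allocation step described in the abstract, below is a minimal sketch of a greedy budget allocator in Python. It assumes each query comes with a predicted accuracy curve over a discrete grid of candidate token budgets (e.g., produced by the hidden-state MLP or the LoRA classifier); the function names, candidate budgets, and curve format are illustrative assumptions, not the paper's exact interface.

```python
# Minimal sketch of a greedy batch allocator, assuming predicted accuracy is
# available for a discrete grid of candidate per-query budgets. All names and
# defaults here are illustrative assumptions, not the paper's exact interface.
import heapq

def _next_upgrade(curve, budgets, current):
    """Return (accuracy gain, extra tokens) for moving from `current` to the
    next larger candidate budget, or (0.0, None) if already at the maximum."""
    idx = budgets.index(current)
    if idx + 1 == len(budgets):
        return 0.0, None
    nxt = budgets[idx + 1]
    return curve[nxt] - curve[current], nxt - current

def greedy_allocate(pred_acc, budgets, total_budget):
    """Distribute `total_budget` tokens across queries to maximize the sum of
    predicted accuracies.

    pred_acc:     one dict per query, mapping candidate budget -> predicted accuracy
    budgets:      sorted candidate per-query budgets, e.g. [128, 256, 512, 1024]
    total_budget: total tokens for the batch (assumed >= len(pred_acc) * budgets[0])
    """
    n = len(pred_acc)
    alloc = [budgets[0]] * n              # start every query at the smallest budget
    remaining = total_budget - sum(alloc)

    # Max-heap keyed on predicted accuracy gain per extra token for each query's
    # next possible upgrade.
    heap = []
    for i in range(n):
        gain, cost = _next_upgrade(pred_acc[i], budgets, alloc[i])
        if cost is not None:
            heapq.heappush(heap, (-gain / cost, i))

    while heap and remaining > 0:
        _, i = heapq.heappop(heap)
        gain, cost = _next_upgrade(pred_acc[i], budgets, alloc[i])
        if cost is None or cost > remaining:
            continue                       # this query cannot afford its next upgrade
        alloc[i] += cost                   # grant the upgrade
        remaining -= cost
        gain, cost = _next_upgrade(pred_acc[i], budgets, alloc[i])
        if cost is not None:
            heapq.heappush(heap, (-gain / cost, i))

    return alloc

# Example: three queries with predicted accuracy curves from a (hypothetical) predictor.
if __name__ == "__main__":
    curves = [
        {128: 0.90, 256: 0.92, 512: 0.93, 1024: 0.93},  # easy query, saturates early
        {128: 0.40, 256: 0.65, 512: 0.80, 1024: 0.85},  # hard query, keeps improving
        {128: 0.70, 256: 0.78, 512: 0.80, 1024: 0.81},
    ]
    print(greedy_allocate(curves, [128, 256, 512, 1024], total_budget=1536))
```

Each iteration upgrades the query with the highest predicted accuracy gain per additional token, which is one natural way to realize the greedy maximize-expected-accuracy objective described in the abstract.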
Submission Number: 14