Improving Language Modeling by Increasing Test-time Planning Compute

Published: 06 Oct 2024, Last Modified: 12 Nov 2024, WiNLP 2024, CC BY 4.0
Keywords: language modeling, test-time compute, inference
TL;DR: We condition an LM on multiple planner-predicted sequences of abstract actions.
Abstract: Modern language models predict the next token in the sequence by applying a powerful function to the past text. However, language models have no explicit mechanism that allows them to spend computation time planning long-distance future text, leading to suboptimal token predictions. In this paper, we propose a planner that predicts a latent plan spanning many sentences into the future. By sampling multiple plans at once, we condition the language model on an accurate approximation of the distribution of text continuations, which improves next-token prediction accuracy. In effect, this allows trading computation time for prediction accuracy.
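A minimal sketch of the idea described in the abstract: sample several latent plans from a planner, condition the next-token distribution on each, and average. The planner, its Gaussian latent space, the plan-conditioned head, and all sizes below are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumed design): marginalize the next-token distribution over K sampled latent plans.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, PLAN_DIM, K = 1000, 256, 64, 8  # hypothetical sizes

class Planner(nn.Module):
    """Predicts a distribution over latent plans from the context encoding (assumed Gaussian)."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Linear(HIDDEN, PLAN_DIM)
        self.log_sigma = nn.Linear(HIDDEN, PLAN_DIM)

    def sample_plans(self, ctx, k):
        mu, sigma = self.mu(ctx), self.log_sigma(ctx).exp()
        eps = torch.randn(k, *mu.shape)     # draw k latent plans per context
        return mu + sigma * eps             # (k, batch, PLAN_DIM)

class PlanConditionedHead(nn.Module):
    """Next-token logits conditioned on the context and one latent plan."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(HIDDEN + PLAN_DIM, VOCAB)

    def forward(self, ctx, plan):
        return self.proj(torch.cat([ctx, plan], dim=-1))

def next_token_probs(ctx, planner, head, k=K):
    plans = planner.sample_plans(ctx, k)                   # (k, batch, PLAN_DIM)
    logits = torch.stack([head(ctx, p) for p in plans])    # (k, batch, VOCAB)
    # Averaging the per-plan distributions approximates the marginal over continuations;
    # more samples (more test-time compute) give a better approximation.
    return F.softmax(logits, dim=-1).mean(dim=0)           # (batch, VOCAB)

if __name__ == "__main__":
    ctx = torch.randn(2, HIDDEN)            # stand-in for the LM's context encoding
    probs = next_token_probs(ctx, Planner(), PlanConditionedHead())
    print(probs.shape, probs.sum(dim=-1))   # (2, 1000), each row sums to ~1
```

Increasing K trades additional inference compute for a closer approximation of the continuation distribution, which is the compute-for-accuracy trade-off the abstract describes.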
Submission Number: 54