Looking beyond the next token

Published: 10 Jun 2025, Last Modified: 10 Jun 2025, LCFM 2025, CC BY 4.0
Keywords: Long horizon generation, planning, reasoning
TL;DR: Don't change your architecture; change your data distribution: interleaving future sub-goals into the training distribution improves implicit long-horizon planning.
Abstract: The structure of causal language model training assumes that each token can be accurately predicted from the preceding context. This contrasts with humans' natural writing and reasoning process, where goals are typically known before the exact argument or phrasing. While this mismatch has been well studied in the literature, the working assumption has been that architectural changes are needed to address it. We argue that rearranging and processing the training data sequences allows models to more accurately imitate the true data-generating process, without requiring any other changes to the architecture or training infrastructure. We demonstrate that this technique, Trelawney, together with the inference algorithms derived from it, improves performance on benchmarks spanning planning, algorithmic reasoning, and story-generation tasks. Finally, our method naturally enables the generation of long-term goals at no additional cost. We investigate how the model's goal-generation capability can be used to further improve long-horizon planning and reasoning.
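To make the data-rearrangement idea concrete, here is a minimal sketch of what interleaving a future sub-goal into a training sequence could look like. The delimiter tokens (`<goal>`, `</goal>`), the insertion positions, and the helper name are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
from typing import List

def interleave_future_goal(
    tokens: List[str],
    insert_pos: int,
    goal_start: int,
    goal_len: int,
    open_tok: str = "<goal>",   # hypothetical delimiter token
    close_tok: str = "</goal>",  # hypothetical delimiter token
) -> List[str]:
    """Copy a future span tokens[goal_start:goal_start + goal_len] and splice it
    into the sequence at insert_pos, wrapped in delimiter tokens. The rest of the
    sequence is left intact, so standard next-token training still applies, but the
    model now sees (and can learn to generate) the long-term goal before the
    intermediate steps that lead to it."""
    goal = tokens[goal_start:goal_start + goal_len]
    return tokens[:insert_pos] + [open_tok] + goal + [close_tok] + tokens[insert_pos:]

# Example: the eventual sub-goal "reached the tower" is surfaced early in the context.
story = "the knight left the village crossed the river and reached the tower".split()
augmented = interleave_future_goal(story, insert_pos=5, goal_start=9, goal_len=3)
print(" ".join(augmented))
# the knight left the village <goal> reached the tower </goal> crossed the river and reached the tower
```

Because the augmentation only rewrites the token stream, it can be applied as a preprocessing step over an existing corpus without touching the model architecture or training loop.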
Submission Number: 29