Keywords: transformer, autoregressive model, multi-token prediction, generative model, large language models
TL;DR: An LLM framework to predict multiple tokens with arbitrary dependencies in a single model call.
Abstract: Autoregressive decoding in language models is inherently slow, generating only one token per forward pass. We propose Parallel Token Prediction (PTP), a general-purpose framework for predicting multiple tokens in a single model call. PTP moves the source of randomness from post-hoc sampling to random input variables, making future tokens deterministic functions of those inputs and thus jointly predictable in a single forward pass. We prove that a single PTP call can represent arbitrary dependencies between tokens. PTP is trained either by distilling an existing autoregressive model or, without a teacher, via inverse autoregressive training. Experimentally, PTP achieves a 2.4$\times$ speedup on a diverse-task speculative decoding benchmark. We provide code and checkpoints at https://github.com/mandt-lab/ptp.
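The core idea of moving randomness from post-hoc sampling into the model's inputs can be illustrated with standard inverse-CDF (reparameterized) sampling. The sketch below is illustrative only and is not the paper's actual PTP architecture: `inverse_cdf_sample` and the toy distributions `p1`, `p2` are hypothetical names, and a real PTP model would condition on the uniform inputs inside a single transformer call rather than via explicit per-position CDFs.

```python
import numpy as np

def inverse_cdf_sample(probs, u):
    # Invert the CDF of a categorical distribution at a pre-drawn uniform u.
    # Given the same u, the chosen token is a deterministic function of the
    # distribution: the randomness lives in the input, not in post-hoc sampling.
    cdf = np.cumsum(probs)
    return int(np.searchsorted(cdf, u, side="right"))

# Hypothetical next-token distributions for two future positions.
p1 = np.array([0.1, 0.6, 0.3])
p2 = np.array([0.5, 0.2, 0.3])

# Draw the randomness up front, before any "model call".
rng = np.random.default_rng(0)
u1, u2 = rng.uniform(size=2)

# Both future tokens are now deterministic given (u1, u2), so a single
# forward pass that receives (u1, u2) as inputs could, in principle,
# emit both tokens jointly.
t1 = inverse_cdf_sample(p1, u1)
t2 = inverse_cdf_sample(p2, u2)
```

Because the mapping from uniforms to tokens is deterministic, re-running it with the same inputs reproduces the same tokens exactly, which is what makes joint prediction in one call well-defined.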
Primary Area: foundation or frontier models, including LLMs
Submission Number: 14536