Strided Transformers for Partially-Parallelized Inference

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: transformers, auto-regressive, inference
TL;DR: Strided auto-regressive dependencies for partially-parallelized inference
Abstract: Auto-regressive large language models have dramatically improved performance in natural language generation tasks. Popular architectures such as the transformer have enabled parallel training across tokens and scaled to large training corpora. Generation, however, remains a fundamentally serial task: a token must be fully predicted before processing of the next token can begin. In this work, we propose a framework for partially-parallelized large model inference by striding auto-regressive dependencies between model layers, yielding strategies to improve latency in either memory- or compute-bound workflows, while preserving fully parallel training. The associated models require only a simple modification during training, rolling representations along the sequence axis, and create a favorable setup in inference with only minor degradation in accuracy.
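As a rough illustration of the training-time modification described in the abstract (a minimal sketch, not the authors' code; the layer structure, `stride` parameter, and class name `StridedTransformer` are assumptions), the hidden states can be shifted along the sequence axis between layers so that deeper layers operate on representations from earlier positions:

```python
import torch
import torch.nn as nn

class StridedTransformer(nn.Module):
    """Toy sketch: shift hidden states along the sequence axis between
    layers, striding the auto-regressive dependency across depth."""

    def __init__(self, d_model=256, n_heads=4, n_layers=4, stride=1):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        self.stride = stride  # positions to shift per layer (assumed hyperparameter)

    def forward(self, x, causal_mask):
        # x: (batch, seq_len, d_model)
        for i, layer in enumerate(self.layers):
            if i > 0 and self.stride > 0:
                # Shift representations forward along the sequence axis, so that
                # position t in this layer is computed from position t - stride
                # of the previous layer's output; the first positions are zero-padded.
                pad = torch.zeros_like(x[:, : self.stride])
                x = torch.cat([pad, x[:, : -self.stride]], dim=1)
            x = layer(x, src_mask=causal_mask)
        return x

# Usage example
model = StridedTransformer()
x = torch.randn(2, 16, 256)
mask = nn.Transformer.generate_square_subsequent_mask(16)
out = model(x, mask)  # (2, 16, 256)
```

Because the shift is a tensor operation applied to whole sequences at once, training remains fully parallel across tokens; at inference time, the staggered dependency is what allows layers to be evaluated with partial parallelism, as the abstract claims.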
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3489