What Layers When: Learning to Skip Compute in LLMs with Residual Gates

ICLR 2026 Conference Submission 16551 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: decoder-only language models, large language models, layer skipping, adaptive compute, efficient inference, LLM
TL;DR: We add learnable gates at the exit points of the Attention and MLP modules in GPT-style models; the gates compress each branch's output, provide a per-token importance signal, and thereby serve as a skipping mechanism.
Abstract: We introduce GateSkip, a simple residual-stream gating mechanism that enables token-wise layer skipping in decoder-only LMs. Each Attention/MLP branch is equipped with a sigmoid-linear gate that compresses the branch’s output before it re-enters the residual stream. During inference we rank tokens by the gate and skip low-importance ones using a per-layer budget. While early-exit or router-based Mixture-of-Depths models are known to be unstable and need extensive retraining, our smooth, differentiable gates fine-tune stably on top of pretrained models. On long-form reasoning, we save up to 15% compute while retaining >90% of baseline accuracy. On instruction-tuned models we see accuracy gains at full compute and match baseline quality near 50% savings. The learned gates give insight into transformer information flow (e.g., BOS tokens act as anchors), and the method combines easily with quantization, pruning, and self-speculative decoding.
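The abstract describes the mechanism at a high level: a sigmoid-linear gate compresses each Attention/MLP branch output before it re-enters the residual stream, and at inference tokens are ranked by their gate values and skipped under a per-layer budget. The sketch below is not the authors' code; `GatedBranch`, `gate_proj`, and the top-k interpretation of the "per-layer budget" are assumptions made for illustration, and a real implementation would avoid computing the branch at all for skipped tokens rather than zeroing its contribution.

```python
# Minimal sketch (assumed implementation, not the paper's code) of a
# residual-stream gate with budgeted token skipping at inference.
import torch
import torch.nn as nn


class GatedBranch(nn.Module):
    """Wraps an Attention or MLP branch with a sigmoid-linear gate on its output."""

    def __init__(self, branch: nn.Module, d_model: int):
        super().__init__()
        self.branch = branch
        # Hypothetical gate parameterization: linear projection to a scalar, then sigmoid.
        self.gate_proj = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor, budget: float | None = None) -> torch.Tensor:
        out = self.branch(x)                        # (batch, seq, d_model)
        g = torch.sigmoid(self.gate_proj(out))      # (batch, seq, 1), per-token gate value
        if budget is None:
            # Fine-tuning: smooth, differentiable compression of the branch output.
            return x + g * out
        # Inference (assumed budget semantics): keep the top-`budget` fraction of
        # tokens per sequence by gate value; skipped tokens pass the residual through.
        k = max(1, int(budget * x.size(1)))
        scores = g.squeeze(-1)                               # (batch, seq)
        cutoff = scores.topk(k, dim=-1).values[..., -1:]     # per-sequence threshold
        keep = (scores >= cutoff).unsqueeze(-1).to(out.dtype)
        return x + keep * g * out
```

Because the gate is applied multiplicatively inside the residual update, fine-tuning it on top of a pretrained model only rescales existing branch outputs, which is consistent with the stability claim in the abstract.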
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 16551