ELT: Elastic Looped Transformers for Visual Generation

Published: 27 Apr 2026 · Last Modified: 27 Apr 2026 · EDGE Poster · CC BY 4.0
Keywords: Efficient Visual Generation, Recurrent Transformers, Parameter Efficiency, Elastic Inference, Looping
Paper Track: Extended Abstract (non-archival)
TL;DR: Elastic Looped Transformers for efficient, scalable visual generation, enabling any-time/elastic inference with competitive generation quality and high throughput.
Abstract: We introduce Elastic Looped Transformers (ELT), a highly parameter-efficient class of visual generative models based on a recurrent transformer architecture. While conventional generative models rely on deep stacks of unique transformer layers, our approach employs iterative, weight-shared transformer blocks to drastically reduce parameter counts while maintaining high synthesis quality. To effectively train these models for image and video generation, we propose Intra-Loop Self-Distillation (ILSD), in which student configurations (intermediate loops) are distilled from the teacher configuration (maximum training loops) to ensure consistency across the model's depth within a single training step. Our framework yields a family of elastic models from a single training run, enabling any-time inference with dynamic trade-offs between computational cost and generation quality, all at the same parameter count. ELT significantly shifts the efficiency frontier for visual synthesis: with a $4\times$ reduction in parameter count under iso-inference-compute settings, ELT achieves a competitive FID of $2.0$ on class-conditional ImageNet $256 \times 256$ and an FVD of $72.8$ on class-conditional UCF-101.
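To make the looping and distillation idea concrete, the following is a minimal, hypothetical sketch of a weight-shared looped block with an ILSD-style objective, written from the abstract's description alone. All names (`shared_block`, `forward_loops`, `ilsd_loss`, `max_loops`) and the specific loss form (mean-squared error against a stop-gradient teacher) are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# One shared weight matrix reused at every loop iteration (parameter sharing).
W = rng.normal(scale=0.1, size=(d, d))

def shared_block(x):
    # Stand-in for a weight-shared transformer block: residual + nonlinearity.
    return x + np.tanh(x @ W)

def forward_loops(x, max_loops):
    # Apply the same block repeatedly; keep every intermediate state so each
    # loop count can act as a "student" model configuration.
    states = []
    for _ in range(max_loops):
        x = shared_block(x)
        states.append(x)
    return states

def ilsd_loss(states):
    # Distill each intermediate loop (student) toward the final loop (teacher).
    # The teacher is treated as a fixed target (no gradient), hence the copy.
    teacher = states[-1].copy()
    return sum(np.mean((s - teacher) ** 2) for s in states[:-1]) / (len(states) - 1)

x0 = rng.normal(size=(2, d))
states = forward_loops(x0, max_loops=6)
loss = ilsd_loss(states)
print(f"ILSD-style loss over {len(states) - 1} student configurations: {loss:.4f}")
```

At inference time, any prefix of the loop schedule can be used as a standalone model, which is what gives the "elastic" compute/quality trade-off described above.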
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 24