A Constrained Optimization Perspective of Unrolled Transformers

ICLR 2026 Conference Submission 19336 Authors

Published: 19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · License: CC BY 4.0
Keywords: constrained learning, unrolled neural networks, transformers
TL;DR: Transformers trained with monotonic descent constraints are more robust to out-of-distribution perturbations.
Abstract: We introduce a constrained optimization framework for training transformers that behave like descent algorithms. Specifically, we enforce layerwise descent constraints on the objective function and replace standard empirical risk minimization (ERM) with a primal-dual training scheme. This approach yields models whose intermediate representations decrease the loss monotonically in expectation across layers. We apply our method to both unrolled transformer architectures and conventional pretrained transformers on video denoising and text classification tasks. Across these settings, we observe that constrained transformers are more robust to perturbations and generalize better out of distribution, while preserving competitive in-distribution performance.
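To make the training scheme concrete, below is a minimal PyTorch sketch of what a primal-dual update with layerwise descent constraints might look like. It is an illustration under stated assumptions, not the authors' implementation: the names (`LayerwiseTransformer`, `primal_dual_step`, the shared `readout` probe) are hypothetical, and the constraint is modeled as E[loss at layer l+1] ≤ E[loss at layer l] for each consecutive pair of layers, with one nonnegative multiplier per constraint updated by dual ascent.

```python
import torch
import torch.nn as nn

# Hypothetical toy model: each encoder layer is treated as one unrolled
# descent step, and a shared linear readout evaluates the task loss on
# every layer's intermediate representation.
class LayerwiseTransformer(nn.Module):
    def __init__(self, dim=32, n_layers=4, n_heads=4, n_classes=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        self.readout = nn.Linear(dim, n_classes)  # shared probe across layers

    def forward(self, x):
        # Return logits at every layer so the trainer can score the
        # objective on each intermediate representation.
        logits_per_layer = []
        for layer in self.layers:
            x = layer(x)
            logits_per_layer.append(self.readout(x.mean(dim=1)))
        return logits_per_layer


def primal_dual_step(model, opt, lambdas, x, y, dual_lr=0.01):
    """One primal-dual update: descend on the Lagrangian in the model
    parameters, then ascend in the multipliers (projected onto >= 0)."""
    criterion = nn.CrossEntropyLoss()
    losses = torch.stack([criterion(z, y) for z in model(x)])

    # Constraint slacks: positive when layer l+1 fails to decrease the loss.
    slacks = losses[1:] - losses[:-1]

    # Lagrangian = final-layer task loss + sum_l lambda_l * slack_l.
    lagrangian = losses[-1] + (lambdas * slacks).sum()

    opt.zero_grad()
    lagrangian.backward()
    opt.step()

    # Dual ascent on the multipliers, projected onto the nonnegative orthant.
    with torch.no_grad():
        lambdas.add_(dual_lr * slacks.detach()).clamp_(min=0.0)
    return losses.detach()


# Toy usage on random data (batch of 8 sequences, length 16, dim 32).
model = LayerwiseTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lambdas = torch.zeros(len(model.layers) - 1)  # one multiplier per constraint
x, y = torch.randn(8, 16, 32), torch.randint(0, 2, (8,))
for _ in range(5):
    layer_losses = primal_dual_step(model, opt, lambdas, x, y)
```

Note the contrast with plain ERM: instead of minimizing only the final-layer loss, the multipliers grow whenever a layer violates monotone descent, progressively penalizing non-descending layers until the constraints are satisfied in expectation.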
Supplementary Material: zip
Primary Area: optimization
Submission Number: 19336