Degrees of Freedom for Linear Attention: Distilling Softmax Attention with Optimal Feature Efficiency

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Degrees of Freedom, Linear Attention
Abstract: Linear attention has attracted interest as a computationally efficient approximation to softmax attention, especially for long sequences. Recent studies have explored distilling softmax attention in pre-trained Transformers into linear attention. However, a critical challenge remains: *how to choose the feature dimension that governs the approximation quality*. Existing methods fix this dimension uniformly across all attention layers, overlooking their diverse roles and complexities. In this paper, we propose a principled method to automatically determine the feature dimension in linear attention using the concept of statistical *degrees of freedom*, which represents the effective dimensionality of the inputs. We provide a theoretical bound on the approximation error and show that the dimension chosen by our method achieves smaller errors under a fixed computational budget. Furthermore, we introduce an efficient layerwise training strategy to learn nonlinear features tailored to each layer. Experiments on multiple pre-trained Transformers demonstrate that our method improves the performance of distilled models compared to baselines without increasing the inference cost. Our findings also provide insight into how the complexity of the attention mechanism evolves across layers.
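The sketch below illustrates the setup the abstract describes: softmax attention costs O(n^2) in the sequence length, while feature-map (linear) attention costs O(n·r), where the feature dimension r controls the approximation quality. The Performer-style random feature map used here is only a placeholder assumption for illustration; the paper instead selects r per layer via degrees of freedom and learns nonlinear features layerwise.

```python
# Minimal sketch (not the paper's implementation) contrasting softmax attention
# with feature-map linear attention. The feature dimension r is the quantity
# the paper chooses per layer; the random-feature phi below is an assumed stand-in.
import numpy as np

def softmax_attention(Q, K, V):
    # Q, K: (n, d); V: (n, d_v). Forms the full n x n weight matrix: O(n^2) cost.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi):
    # phi maps (n, d) -> (n, r); never materializes the n x n matrix: O(n * r) cost.
    Qf, Kf = phi(Q), phi(K)                  # (n, r)
    kv = Kf.T @ V                            # (r, d_v), computed once
    normalizer = Qf @ Kf.sum(axis=0)         # (n,)
    return (Qf @ kv) / normalizer[:, None]

rng = np.random.default_rng(0)
n, d, r = 128, 64, 32                        # r = feature dimension to be chosen
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
W = rng.standard_normal((d, r))              # hypothetical random-feature weights

def phi(X):
    # Positive random features approximating the kernel exp(q.k / sqrt(d));
    # the paper learns nonlinear features tailored to each layer instead.
    Xs = X / X.shape[-1] ** 0.25
    return np.exp(Xs @ W - 0.5 * (Xs ** 2).sum(-1, keepdims=True)) / np.sqrt(r)

err = np.linalg.norm(softmax_attention(Q, K, V) - linear_attention(Q, K, V, phi))
print(f"approximation error with r={r}: {err:.3f}")
```

Increasing r drives the error toward zero but raises the inference cost, which is why a uniform r across layers can waste budget on layers whose inputs have low effective dimensionality.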
Supplementary Material: zip
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 8691