Entropy Meets Importance: A Unified Head Importance–Entropy Score for Stable and Efficient Transformer Pruning

ICLR 2026 Conference Submission15856 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Transformer Architecture, Attention Head Pruning, Model Stability, Attention Entropy
Abstract: Transformer-based models have achieved remarkable performance on NLP tasks. However, their structural characteristics, namely many layers and attention heads, introduce efficiency challenges for inference and deployment. To address these challenges, various pruning methods have recently been proposed. Notably, gradient-based methods using Head Importance Scores (HIS) have gained traction for their interpretability, efficiency, and ability to identify redundant heads. However, HIS alone is limited: it captures only the gradient-driven contribution of each head and overlooks the diversity of attention patterns. To overcome this limitation, we introduce a novel pruning criterion, **HIES (Head Importance-Entropy Score)**, which integrates head importance scores with attention entropy so that the two signals provide complementary evidence about each head's contribution. Empirically, HIES-based pruning yields up to 15.2\% improvement in model quality and $2.04\times$ improvement in stability over HIS-only methods, enabling substantial model compression without sacrificing either accuracy or stability. Code will be released upon publication.
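A minimal sketch of the idea described in the abstract, assuming HIES blends min-max-normalized head importance with per-head attention entropy via a weighted sum; the normalization, the mixing weight `alpha`, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def attention_entropy(attn_probs: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """attn_probs: [batch, heads, query_len, key_len] attention distributions.
    Returns the mean Shannon entropy per head, shape [heads]."""
    ent = -(attn_probs * (attn_probs + eps).log()).sum(dim=-1)  # [batch, heads, query_len]
    return ent.mean(dim=(0, 2))

def hies_scores(head_importance: torch.Tensor,
                attn_probs: torch.Tensor,
                alpha: float = 0.5) -> torch.Tensor:
    """Blend normalized head importance (HIS) with normalized attention entropy.
    Higher score => the head is kept; lower score => candidate for pruning."""
    def minmax(x: torch.Tensor) -> torch.Tensor:
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    his = minmax(head_importance)
    ent = minmax(attention_entropy(attn_probs))
    return alpha * his + (1.0 - alpha) * ent

if __name__ == "__main__":
    # Toy usage: prune the 3 heads with the lowest combined score.
    torch.manual_seed(0)
    probs = torch.softmax(torch.randn(2, 12, 16, 16), dim=-1)  # fake attention maps
    importance = torch.rand(12)                                 # fake HIS values
    scores = hies_scores(importance, probs)
    print("Heads to prune:", scores.argsort()[:3].tolist())
```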
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 15856