Abstract: High-order numerical methods enhance Transformer performance in domains such as NLP and CV, but they introduce a performance-efficiency trade-off due to increased computational overhead. Our analysis reveals that conventional efficiency techniques, such as distillation, can be detrimental to the performance of these models, exemplified by PCformer. To explore more optimizable ODE-based Transformer architectures, we propose the Iterative Implicit Euler Transformer (IIET), which simplifies high-order methods using an iterative implicit Euler approach. This simplification not only yields superior performance but also facilitates model compression compared to PCformer. To enhance inference efficiency, we introduce Iteration Influence-Aware Distillation (IIAD). Through a continued training phase, IIAD eliminates non-essential iterations, reducing IIET's inference computational overhead by over 60% while preserving 99.4% of its task performance. On lm-evaluation-harness, IIET improves average accuracy by 2.65% over vanilla Transformers and by 0.8% over PCformer. The efficient variant, E-IIET, achieves a 1.83x speedup and a performance gain exceeding 0.5% compared to PCformer.
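To illustrate the core idea named in the abstract, the snippet below is a minimal sketch of an iterative implicit Euler update applied to a residual Transformer block, assuming the sub-layer F (e.g., attention plus FFN) plays the role of the ODE vector field. The class name, iteration count, and explicit-Euler initialization are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class IterativeImplicitEulerBlock(nn.Module):
    """Hypothetical sketch: approximate the implicit Euler step
        x_{t+1} = x_t + F(x_{t+1})
    by fixed-point iteration, starting from the explicit Euler estimate."""

    def __init__(self, layer: nn.Module, num_iterations: int = 2):
        super().__init__()
        self.layer = layer                  # sub-layer F (attention + FFN)
        self.num_iterations = num_iterations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Explicit Euler initialization of the next state.
        y = x + self.layer(x)
        # Fixed-point refinement toward the implicit Euler solution.
        for _ in range(self.num_iterations):
            y = x + self.layer(y)
        return y
```

Under this reading, dropping refinement iterations (as IIAD does for non-essential ones) directly reduces the number of sub-layer evaluations per block at inference time.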
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: ordinary differential equations, language modeling
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 8365