Track: tiny / short paper (up to 4 pages)
Keywords: knowledge distillation; transformer models; computational efficiency
TL;DR: Our inhibitor transformer, which replaces dot-product attention with Manhattan-distance scores and ReLU activations for computational efficiency, achieves NLP benchmark performance comparable to conventional dot-product transformers when trained via knowledge distillation.
Abstract: This work explores optimizing transformer-based language models by combining model compression techniques with inhibitor attention, a novel alternative to conventional attention. Inhibitor attention employs Manhattan distances and ReLU activations in place of the matrix multiplications and softmax activation of scaled dot-product attention. This shift offers potential computational and energy savings while maintaining model effectiveness. We propose further adjustments to improve the inhibitor mechanism's training efficiency and evaluate its performance on the DistilBERT architecture.
Our knowledge distillation experiments indicate that the modified inhibitor transformer model can achieve competitive performance on standard NLP benchmarks, including General Language Understanding Evaluation (GLUE) and sentiment analysis tasks.
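To make the contrast with scaled dot-product attention concrete, the core idea can be sketched as follows. This is a simplified, hypothetical formulation based only on the abstract's description (Manhattan-distance scores, ReLU in place of softmax); the function name, the `gamma` scaling parameter, and the exact inhibition rule are illustrative assumptions, not the paper's definitive equations.

```python
import numpy as np

def inhibitor_attention(Q, K, V, gamma=1.0):
    """Sketch of an inhibitor-style attention head (hypothetical formulation).

    Scores are pairwise Manhattan distances between queries and keys, and
    each value vector is "inhibited" (reduced) in proportion to that
    distance, then clipped by ReLU and summed over keys. No matrix
    multiplication or softmax is required.
    """
    # Pairwise Manhattan distances: S[i, j] = sum_d |Q[i, d] - K[j, d]|
    S = np.abs(Q[:, None, :] - K[None, :, :]).sum(axis=-1)      # (n_q, n_k)
    # Distant keys suppress their values more strongly; gamma (an assumed
    # hyperparameter here) controls the strength of the inhibition.
    Z = np.maximum(V[None, :, :] - gamma * S[:, :, None], 0.0)  # (n_q, n_k, d_v)
    return Z.sum(axis=1)                                        # (n_q, d_v)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out = inhibitor_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Note that the only operations on the score path are subtraction, absolute value, and ReLU, which is the source of the claimed computational and energy savings over multiply-accumulate-heavy dot-product attention.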
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 40