Keywords: Transformers, Sequence Modeling, Machine Translation, Language Modeling, Representation Learning, Efficient Networks
Abstract: We introduce a deep and light-weight transformer, DeLighT, that delivers similar or better performance than standard transformer-based models with significantly fewer parameters. DeLighT allocates parameters more efficiently both (1) within each Transformer block, using the DeLighT transformation, a deep and light-weight transformation, and (2) across blocks, using block-wise scaling, which allows for shallower and narrower DeLighT blocks near the input and deeper and wider DeLighT blocks near the output. Overall, DeLighT networks are 2.5 to 4 times deeper than standard transformer models and yet have fewer parameters and operations. Experiments on benchmark machine translation and language modeling tasks show that DeLighT matches or improves the performance of baseline Transformers with 2 to 3 times fewer parameters on average.
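The abstract describes block-wise scaling as allocating less depth near the input and more near the output. Below is a minimal sketch of that idea, assuming a simple linear interpolation of per-block depth between a minimum and a maximum value; the function name and example values are hypothetical and not taken from the authors' code.

```python
# Hypothetical sketch of block-wise scaling: each block's depth is linearly
# interpolated from n_min (input side) to n_max (output side).

def blockwise_scaling(num_blocks: int, n_min: int, n_max: int):
    """Return an assumed per-block depth schedule for DeLighT-style blocks."""
    if num_blocks == 1:
        return [n_max]
    return [
        int(n_min + (n_max - n_min) * b / (num_blocks - 1))
        for b in range(num_blocks)
    ]

# Example: 8 blocks scaled from depth 4 near the input to depth 8 near the output
print(blockwise_scaling(8, 4, 8))  # [4, 4, 5, 5, 6, 6, 7, 8]
```

This illustrates only the depth schedule; in the paper the block widths are scaled in a similar input-to-output fashion.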
One-sentence Summary: Deep and light-weight transformer that matches or improves the performance of baseline Transformers with 2 to 3 times fewer parameters on standard machine translation and language modeling tasks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [sacmehta/delight](https://github.com/sacmehta/delight) + [1 community implementation on Papers with Code](https://paperswithcode.com/paper/?openreview=ujmgfuxSLrO)
Data: [WMT 2016](https://paperswithcode.com/dataset/wmt-2016), [WMT 2016 News](https://paperswithcode.com/dataset/wmt-2016-news), [WikiText-103](https://paperswithcode.com/dataset/wikitext-103), [WikiText-2](https://paperswithcode.com/dataset/wikitext-2)