Tensorized Embedding Layers for Efficient Model Compression

25 Sept 2019 (modified: 23 Mar 2025) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: Embedding layers compression, tensor networks, low-rank factorization
TL;DR: Embedding layers are factorized with Tensor Train decomposition to reduce their memory footprint.
Abstract: Embedding layers, which transform input words into real-valued vectors, are key components of deep neural networks used in natural language processing. However, when the vocabulary is large, the corresponding weight matrices can be enormous, which precludes their deployment in resource-limited settings. We introduce a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which compresses the model significantly at the cost of a negligible drop, or even a slight gain, in performance. We evaluate our method on a wide range of natural language processing benchmarks and analyze the trade-off between performance and compression ratio across architectures, from MLPs to LSTMs and Transformers.
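The core idea described in the abstract can be sketched as follows: the vocabulary size and embedding dimension are each factored into smaller integers, the embedding matrix is viewed as a higher-order tensor over those factors, and it is stored as a chain of small TT-cores instead of one large matrix. A minimal NumPy illustration with toy shapes (all shapes, ranks, and names here are hypothetical, not taken from the paper's code):

```python
import numpy as np

# Toy factorization: vocabulary 4*5*6 = 120 tokens, embedding dim 2*2*2 = 8.
vocab_shape = (4, 5, 6)
embed_shape = (2, 2, 2)
ranks = (1, 3, 3, 1)  # TT-ranks; boundary ranks r_0 = r_3 = 1

rng = np.random.default_rng(0)
# One TT-core per factor: core k has shape (r_{k-1}, v_k, d_k, r_k).
cores = [
    rng.standard_normal((ranks[k], vocab_shape[k], embed_shape[k], ranks[k + 1]))
    for k in range(3)
]

def tt_embedding(token_id):
    """Reconstruct one row of the implicit 120 x 8 embedding matrix from the TT-cores."""
    # Map the flat token id to a multi-index (i1, i2, i3) over vocab_shape.
    idx = np.unravel_index(token_id, vocab_shape)
    # Select one slice per core and contract them along the TT-ranks.
    out = cores[0][:, idx[0], :, :]              # shape (1, d_1, r_1)
    for k in range(1, 3):
        slice_k = cores[k][:, idx[k], :, :]      # shape (r_{k-1}, d_k, r_k)
        out = np.einsum('adr,rbs->adbs', out, slice_k)
        out = out.reshape(1, -1, slice_k.shape[-1])
    return out.reshape(-1)                       # embedding vector of length 8

vec = tt_embedding(42)
n_params = sum(c.size for c in cores)  # 150 core parameters vs. 960 for the dense matrix
```

Even at this toy scale the cores hold 150 parameters versus 960 for the dense 120 x 8 matrix; for realistic vocabularies (tens of thousands of rows) the same construction yields the large compression ratios the paper reports, since only one slice per core is touched for each lookup.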
Code: https://github.com/tt-embedding/tt-embeddings