Hash Layers For Large Sparse Models

21 May 2021, 20:43 (edited 21 Jan 2022) · NeurIPS 2021 Spotlight
  • Keywords: large-scale, sparsity, Transformers, hashing, MoE
  • TL;DR: Proposes to use hashing to select model parameters per input for effective large, sparse Transformer models.
  • Abstract: We investigate the training of sparse layers that use different parameters for different inputs based on hashing in large Transformer models. Specifically, we modify the feedforward layer to hash to different sets of weights depending on the current token, over all tokens in the sequence. We show that this procedure either outperforms or is competitive with learning-to-route mixture-of-expert methods such as Switch Transformers and BASE Layers, while requiring no routing parameters or extra terms in the objective function such as a load balancing loss, and no sophisticated assignment algorithm. We study the performance of different hashing techniques, hash sizes and input features, and show that balanced and random hashes focused on the most local features work best, compared to either learning clusters or using longer-range context. We show our approach works well both on large language modeling and dialogue tasks, and on downstream fine-tuning tasks. (An illustrative sketch of the token-hashing mechanism is given after this list.)
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/facebookresearch/ParlAI/tree/main/projects/params_vs_compute
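The abstract describes replacing the learned router of a mixture-of-experts feedforward layer with a fixed hash of the current token, so that each token is dispatched to one expert FFN without any routing parameters, load-balancing loss, or assignment algorithm. The sketch below illustrates that idea in PyTorch; the class name `HashLayer`, its constructor arguments, and the use of a fixed random token-id-to-expert table are illustrative assumptions, not the authors' released implementation (see the ParlAI link above for that).

```python
# Minimal sketch of a hash-routed sparse feedforward layer (not the authors' code).
# Each token id is mapped to one expert FFN by a fixed random table, so routing
# is parameter-free and requires no load-balancing loss.
import torch
import torch.nn as nn


class HashLayer(nn.Module):
    def __init__(self, vocab_size, d_model, d_ff, num_experts, seed=0):
        super().__init__()
        # One feedforward "expert" per hash bucket.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        # Fixed random token-id -> expert assignment (balanced in expectation).
        g = torch.Generator().manual_seed(seed)
        self.register_buffer(
            "hash_table", torch.randint(num_experts, (vocab_size,), generator=g)
        )

    def forward(self, hidden, token_ids):
        # hidden: (batch, seq, d_model); token_ids: (batch, seq)
        expert_ids = self.hash_table[token_ids]  # which expert handles each token
        out = torch.zeros_like(hidden)
        for e, expert in enumerate(self.experts):
            mask = expert_ids == e  # tokens routed to expert e
            if mask.any():
                out[mask] = expert(hidden[mask])
        return out
```

Because the token-to-expert table is fixed and roughly balanced by construction, this layer needs no learned routing parameters or extra objective terms, which is the property the abstract contrasts with Switch Transformers and BASE Layers.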