DEMix Layers: Disentangling Domains for Modular Language Modeling

Anonymous

16 Oct 2021 (modified: 05 May 2023) · ACL ARR 2021 October Blind Submission · Readers: Everyone
Abstract: We introduce a new domain expert mixture (DEMix) layer that enables conditioning a language model (LM) on the domain of the input text. A DEMix layer is a collection of expert feedforward networks, each specialized to a domain, which makes the LM modular: experts can be mixed, added, or removed after initial training. Extensive experiments with autoregressive transformer LMs (up to 1.3B parameters) show that DEMix layers reduce perplexity, increase training efficiency, and enable rapid adaptation. Mixing experts during inference, using a parameter-free weighted ensemble, enables better generalization to heterogeneous or unseen domains. Adding experts incorporates new domains without forgetting older ones, and removing experts restricts access to unwanted domains without additional training. Overall, these results demonstrate the benefits of explicitly conditioning on textual domains during language modeling.
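
The abstract describes the DEMix layer only at a high level. The following is a minimal PyTorch-style sketch of the general idea: one feedforward expert per training domain, routed by a known domain ID during training and combined with externally supplied mixture weights (e.g., a posterior over domains) at inference. All names (DEMixFeedForward, domain_id, domain_weights) and design details here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class DEMixFeedForward(nn.Module):
    """Sketch of a domain-expert mixture (DEMix) feedforward layer.

    Holds one feedforward expert per training domain. A batch assumed to
    come from a single known domain is routed to that domain's expert;
    otherwise, expert outputs are mixed with supplied weights.
    """

    def __init__(self, d_model: int, d_hidden: int, num_domains: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_domains)
        )

    def forward(self, x: torch.Tensor, domain_id: int = None,
                domain_weights: torch.Tensor = None) -> torch.Tensor:
        if domain_id is not None:
            # Known domain: route the whole input to a single expert.
            return self.experts[domain_id](x)
        # Unknown or heterogeneous domain: weighted ensemble of all experts.
        outputs = torch.stack([expert(x) for expert in self.experts], dim=0)
        # domain_weights: shape (num_domains,), mixture weights summing to 1.
        return torch.einsum("e,e...->...", domain_weights, outputs)


if __name__ == "__main__":
    layer = DEMixFeedForward(d_model=16, d_hidden=64, num_domains=4)
    x = torch.randn(2, 8, 16)  # (batch, sequence, d_model)

    # Route to the expert for domain 2.
    y_single = layer(x, domain_id=2)

    # Mix all experts with uniform weights (a stand-in for a domain posterior).
    weights = torch.full((4,), 0.25)
    y_mixed = layer(x, domain_weights=weights)
    print(y_single.shape, y_mixed.shape)
```

In this sketch, adding a domain amounts to appending a new expert to the module list, and removing a domain amounts to dropping its expert; the mixture weights are supplied from outside the layer rather than learned, mirroring the parameter-free ensembling described in the abstract.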
