Tricks for Training Sparse Translation Models

Anonymous

08 Mar 2022 (modified: 05 May 2023)
NAACL 2022 Conference Blind Submission
Readers: Everyone
Paper Link: https://openreview.net/forum?id=1kJ4u_rolHr
Paper Type: Short paper (up to four pages of content + unlimited references and appendices)
Abstract: Multi-task learning with an unbalanced data distribution skews model learning towards high resource tasks, especially when model capacity is fixed and fully shared across all tasks. Sparse scaling architectures, such as BASELayers, provide flexible mechanisms for different tasks to have a variable number of parameters, which can be useful to counterbalance skewed data distributions. We find that sparse architectures for multilingual machine translation can perform poorly out of the box and propose two straightforward techniques to mitigate this — a temperature heating mechanism and dense pre-training. Overall, these methods improve performance on two multilingual translation benchmarks compared to standard BASELayers and Dense scaling baselines, and in combination, more than double model convergence speed.
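The abstract does not spell out the temperature heating mechanism. A minimal sketch is given below, assuming "temperature heating" means gradually raising the temperature used for temperature-based sampling over language pairs during training, so that low-resource pairs are up-weighted more as training progresses. The function names, corpus sizes, and schedule endpoints are all hypothetical illustrations, not the paper's actual implementation.

```python
def sampling_temperature(step, total_steps, t_start=1.0, t_end=5.0):
    """Linearly "heat" the sampling temperature from t_start to t_end.
    Early in training the sampling distribution tracks raw corpus sizes;
    later it flattens toward uniform, up-weighting low-resource pairs.
    (Hypothetical schedule; the paper may use a different shape.)"""
    frac = min(step / total_steps, 1.0)
    return t_start + frac * (t_end - t_start)


def language_sampling_probs(sizes, temperature):
    """Standard temperature-based sampling: p_i proportional to (n_i / N) ** (1 / T)."""
    total = sum(sizes.values())
    weights = {pair: (n / total) ** (1.0 / temperature) for pair, n in sizes.items()}
    z = sum(weights.values())
    return {pair: w / z for pair, w in weights.items()}


# Illustrative corpus sizes (sentence pairs) for three language pairs.
sizes = {"en-fr": 10_000_000, "en-hi": 500_000, "en-gu": 50_000}
for step in (0, 50_000, 100_000):
    t = sampling_temperature(step, total_steps=100_000)
    print(f"step={step:>7} T={t:.1f} probs={language_sampling_probs(sizes, t)}")
```

Under this reading, dense pre-training would correspond to first training a fully shared dense model and then using it to initialize the sparse BASELayers model before continuing training; the abstract reports that the two techniques together more than double convergence speed.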
Presentation Mode: This paper will be presented virtually
Virtual Presentation Timezone: UTC-5
Copyright Consent Signature (type Name Or NA If Not Transferrable): Dheeru Dua
Copyright Consent Name And Address: University of California, Irvine