TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=ZomWTRoEiz0
Paper Type: Short paper (up to four pages of content + unlimited references and appendices)
Abstract: Masked language models (MLMs) such as BERT have revolutionized the field of Natural Language Understanding in the past few years. However, existing pre-trained MLMs often output an anisotropic distribution of token representations that occupies a narrow subset of the entire representation space. Such token representations are not ideal, especially for tasks that demand discriminative semantic meanings of distinct tokens. In this work, we propose TaCL (Token-aware Contrastive Learning), a novel continual pre-training approach that encourages BERT to learn an isotropic and discriminative distribution of token representations. TaCL is fully unsupervised and requires no additional data. We extensively test our approach on a wide range of English and Chinese benchmarks. The results show that TaCL brings consistent and notable improvements over the original BERT model. Furthermore, we conduct a detailed analysis to reveal the merits and inner workings of our approach.
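The abstract describes the objective only at a high level. Below is a minimal sketch of what a token-level contrastive (InfoNCE-style) loss could look like, assuming a student encoder whose token representations are contrasted against those of a frozen teacher copy of BERT, with the matching position as the positive and all other positions in the sequence as negatives; the function name, tensor shapes, and temperature value are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a token-level contrastive (InfoNCE-style) objective.
# Assumption: a student encoder is contrasted against a frozen teacher copy of BERT;
# all names and the temperature value are illustrative, not the paper's exact setup.
import torch
import torch.nn.functional as F

def token_contrastive_loss(student_repr, teacher_repr, temperature=0.07):
    """student_repr, teacher_repr: [batch, seq_len, hidden] token representations."""
    s = F.normalize(student_repr, dim=-1)  # compare in cosine-similarity space
    t = F.normalize(teacher_repr, dim=-1)
    # Similarity of every student token with every teacher token in the same sequence.
    sim = torch.matmul(s, t.transpose(1, 2)) / temperature  # [batch, seq_len, seq_len]
    # Positive pair: the same position in student and teacher; other positions are negatives.
    labels = torch.arange(sim.size(1), device=sim.device).expand(sim.size(0), -1)
    return F.cross_entropy(sim.reshape(-1, sim.size(-1)), labels.reshape(-1))
```

Minimizing this loss pushes each token representation toward its own teacher counterpart and away from the other tokens in the sequence, which is one way to encourage the more isotropic, discriminative token distribution the abstract describes.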
Copyright Consent Signature (type Name Or NA If Not Transferrable): Yixuan Su
Copyright Consent Name And Address: Yixuan Su
Presentation Mode: This paper will be presented in person in Seattle
0 Replies
