Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach

NAACL-HLT 2021
Authors: Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, Chao Zhang
Published in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021.