ElitePLM: An Empirical Study on General Language Ability Evaluation of Pretrained Language Models

Anonymous

08 Mar 2022 (modified: 05 May 2023) | NAACL 2022 Conference Blind Submission | Readers: Everyone
Paper Link: https://openreview.net/forum?id=vIsNdxqCaoa
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Nowadays, pretrained language models (PLMs) have come to dominate the majority of NLP tasks. However, little research has been conducted on systematically evaluating the language abilities of PLMs. In this paper, we present a large-scale empirical study on general language ability evaluation of PLMs (ElitePLM). In our study, we design four evaluation dimensions, i.e., memory, comprehension, reasoning, and composition, to measure ten widely-used PLMs within five categories. Our empirical results demonstrate that: (1) PLMs with varying training objectives and strategies are good at different ability tests; (2) fine-tuning PLMs on downstream tasks is usually sensitive to data size and distribution; (3) PLMs have excellent transferability between similar tasks. Moreover, the prediction results of PLMs in our experiments are released as an open resource to enable deeper and more detailed analysis of the language abilities of PLMs. This paper can guide future work in selecting, applying, and designing PLMs for specific tasks. We have made all the details of our experiments publicly available at https://github.com/RUCAIBox/ElitePLM.
Presentation Mode: This paper will be presented virtually
Virtual Presentation Timezone: UTC-4
Copyright Consent Signature (type Name Or NA If Not Transferrable): Junyi Li
Copyright Consent Name And Address: Renmin University of China (Beijing, China)