Learning Helpful Inductive Biases from Self-Supervised Pretraining

Anonymous

06 Jun 2020 (modified: 06 Jun 2020) · OpenReview Anonymous Preprint Blind Submission · Readers: Everyone
Abstract: Large pretrained language models demonstrate strong, language-specific inductive biases during fine-tuning that allow them to solve language tasks better than models without pretraining. We aim to characterize these biases and to identify how much pretraining is necessary to acquire them. We introduce a new English-language diagnostic set called MSGS (Mixed Signals Generalization Set), which contains two types of data: mixed data, in which the labels are consistent with both a linguistic feature (e.g., Is the main verb in the progressive form?) and a superficial surface feature (e.g., Does "the" precede "a"?); and unmixed data, in which the labels align only with the linguistic feature. We fine-tune RoBERTa on mixed data (with and without small amounts of inoculating unmixed data) and test on unmixed data to determine which feature the model prefers to generalize on. We also pretrain RoBERTa models from scratch on quantities of data ranging from 1M to 1B words and compare their performance on MSGS to that of the publicly available RoBERTa-Base. We find steady growth in linguistic bias with increasing amounts of pretraining data. The models we test can usually represent the linguistic features, but they only learn to prefer generalizing on the basis of those features after substantial pretraining. In the absence of inoculating data, only RoBERTa-Base demonstrates a linguistic bias with any regularity.
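As a concrete illustration of the fine-tuning protocol described in the abstract, the sketch below builds a toy mixed/unmixed split from the abstract's two example features (progressive main verb vs. "the" preceding "a"), fine-tunes RoBERTa on the mixed data, and checks which feature drives its predictions on the unmixed data. The sentences, label convention, and use of the Hugging Face transformers API are illustrative assumptions for this sketch, not the paper's released code or the MSGS data itself.

```python
# Minimal sketch of the mixed/unmixed generalization probe described above.
# Sentences, labels, and library choices are assumptions for illustration only.
import torch
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def encode(sentences, labels):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    batch["labels"] = torch.tensor(labels)
    return batch

# Mixed training data: the label agrees with BOTH the linguistic feature
# (is the main verb progressive?) and the surface feature (does "the" precede "a"?).
mixed_sentences = ["the cat is eating a fish", "a dog chased the ball"]
mixed_labels = [1, 0]

# Unmixed test data: the two features now disagree, so the model's predictions
# reveal which feature it generalized from.
unmixed_sentences = ["a cat is eating the fish", "the dog chased a ball"]
linguistic_labels = [1, 0]  # labels under the linguistic (progressive) rule

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for _ in range(3):  # a few fine-tuning steps on the mixed data
    loss = model(**encode(mixed_sentences, mixed_labels)).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    preds = model(**encode(unmixed_sentences, linguistic_labels)).logits.argmax(dim=-1)

# Agreement with `linguistic_labels` indicates a linguistic generalization;
# systematic disagreement indicates reliance on the surface feature.
print("predictions:", preds.tolist(), "linguistic labels:", linguistic_labels)
```

In the paper's setup this comparison is run at scale over many linguistic/surface feature pairs and pretraining sizes; the toy example above only shows the shape of the ambiguous-training, disambiguating-test design.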