Dive into the Chasm: Probing the Gap between In- and Cross-Topic Generalization

Anonymous

17 Jun 2023 · ACL ARR 2023 June Blind Submission · Readers: Everyone
Abstract: Pre-trained language models (PLMs) excel in In-Topic setups, where training and evaluation data originate from the same topics. At the same time, they struggle with Cross-Topic setups, where instances from distinct topics are withheld for evaluation. In this paper, we aim to better understand how and why such generalization gaps emerge by probing various PLMs for different aspects. We show for the first time that these generalization gaps and the fragility of token-level interventions vary notably across PLMs. Further, by evaluating large language models (LLMs), we show how our analysis scales to bigger models. Overall, we observe that diverse pre-training objectives and architectural regularization contribute to more robust PLMs and mitigate generalization gaps. Our research contributes to a better understanding of PLMs, to selecting appropriate ones, and to building more robust ones.
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP