Improving QA Generalization by Concurrent Modeling of Multiple Biases

Anonymous

11 Jun 2020 (modified: 11 Jun 2020) · OpenReview Anonymous Preprint Blind Submission · Readers: Everyone
  • Abstract: Existing NLP datasets contain various biases that models can easily exploit to achieve high performance on the corresponding evaluation sets. However, models that rely on dataset-specific biases are limited in their ability to learn more generalizable knowledge about the task from broader patterns in the data. In this paper, we investigate the impact of debiasing methods on generalization and propose a general framework for improving performance on both in-domain and unseen out-of-domain datasets by concurrently modeling multiple biases in the training data. Our framework weights each example based on the biases it contains and the strength of those biases in the training data. It then uses these weights in the training objective so that the model relies less on examples with high bias weights. We extensively evaluate our framework on question answering, a task whose training data spans various domains with multiple biases of different strengths. We perform the evaluations in two settings, in which the model is trained on a single domain or on multiple domains simultaneously, and show the framework's effectiveness in both settings compared to state-of-the-art debiasing methods. We will release our framework upon publication.
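The core idea in the abstract, down-weighting the loss of examples that biases explain well, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function name `debiased_loss` and the specific scaling `1 - bias_weight` are assumptions for the example; the paper derives its per-example weights from multiple bias models and their strengths.

```python
import numpy as np

def debiased_loss(per_example_losses, bias_weights):
    """Hypothetical sketch of a bias-weighted training objective.

    per_example_losses: standard loss (e.g. cross-entropy) per example.
    bias_weights: value in [0, 1] per example; 1 means the example is
    fully explained by known biases and should contribute little.
    """
    scale = 1.0 - np.asarray(bias_weights, dtype=float)
    losses = np.asarray(per_example_losses, dtype=float)
    # Examples with high bias weights are scaled down before averaging.
    return float(np.mean(scale * losses))

# Two examples with equal raw loss; the second is strongly biased,
# so it contributes far less to the objective.
print(debiased_loss([2.0, 2.0], [0.0, 0.9]))  # → 1.1
```

Under this sketch, an unbiased example (weight 0.0) keeps its full loss, while a heavily biased one (weight 0.9) retains only 10% of it, steering the model toward more general patterns.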