CLER: Cross-task Learning with Expert Representation to Generalize Reading and Understanding

Published: 2019, Last Modified: 29 Jul 2025 · MRQA@EMNLP 2019 · CC BY-SA 4.0
Abstract: This paper describes our model for the reading comprehension task of the MRQA shared task. We propose CLER, which stands for Cross-task Learning with Expert Representation, for generalized reading and understanding. To improve generalization, the proposed model combines three key ideas: multi-task learning, a mixture of experts, and ensembling. In-domain datasets are used to train and validate the model, and out-of-domain datasets are used to evaluate how well its performance generalizes. In the submission run, the proposed model achieved an average F1 score of 66.1% in the out-of-domain setting, a 4.3 percentage point improvement over the official BERT baseline model.
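Of the three ideas the abstract names, the mixture-of-experts component is the least self-explanatory: a learned gate softly weights several expert transformations of a shared representation. The following is a minimal illustrative sketch of that general mechanism in NumPy; the function and parameter names (`mixture_of_experts`, `expert_weights`, `gate_weights`) and the linear experts are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixture_of_experts(h, expert_weights, gate_weights):
    """Combine expert outputs of a shared representation with a softmax gate.

    h: (d,) shared input representation
    expert_weights: list of k (d, d) matrices, one linear expert each (illustrative)
    gate_weights: (k, d) gating matrix producing one logit per expert
    """
    expert_outputs = np.stack([W @ h for W in expert_weights])  # (k, d)
    gates = softmax(gate_weights @ h)                           # (k,), sums to 1
    return gates @ expert_outputs                               # (d,) weighted mix
```

In practice the gate lets each input be routed softly toward the experts best suited to it, which is one way such a model can share capacity across heterogeneous QA datasets.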