Unleashing the Multilingual Encoder Potential: Boosting Zero-Shot Performance via Probability Calibration

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Findings
Submission Type: Regular Short Paper
Submission Track: Multilinguality and Linguistic Diversity
Submission Track 2: Efficient Methods for NLP
Keywords: prompt learning, multilingual encoders, calibration
TL;DR: We introduce several calibration methods into prompt learning with multilingual encoders, improving zero- and few-shot performance.
Abstract: Pretrained multilingual encoder models can perform zero-shot multilingual tasks or linguistic probing directly by reformulating input examples into cloze-style prompts. The model then predicts the probabilities of the label words at the masked token position, without requiring any updates to its parameters. However, the performance of this method is limited by the model's bias toward label words that occur frequently during pretraining and therefore receive high probabilities. To address this issue, we combine the models with calibration techniques that modify the probabilities of the label words predicted by the models. We first validate the effectiveness of a simple calibration method we propose, together with other existing techniques, on monolingual encoders in both zero- and few-shot scenarios. We then apply these calibration techniques to multilingual encoders, obtaining substantial performance improvements across a wide range of tasks.
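To make the setup concrete, below is a minimal sketch of cloze-style zero-shot classification with one simple calibration strategy: dividing each label word's probability by its probability under a content-free input, so that frequently pretrained label words are penalized. This is an illustration of the general idea, not the paper's specific proposed method; the model name, prompt template, and label words are assumptions.

```python
# Hypothetical sketch: cloze-style zero-shot classification with a
# multilingual masked-LM encoder, plus content-free prior calibration.
# Not the paper's exact method; template and verbalizers are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "xlm-roberta-base"  # any multilingual masked-LM encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

label_words = ["great", "terrible"]  # hypothetical verbalizers (pos/neg)
# Use the first subword piece of each label word for the mask prediction.
label_ids = [
    tokenizer.convert_tokens_to_ids(tokenizer.tokenize(w)[0])
    for w in label_words
]

def label_probs(text: str) -> torch.Tensor:
    """Probabilities of the label words at the masked token position."""
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
    probs = logits[0, mask_pos].softmax(-1).squeeze(0)
    return probs[label_ids]

# Estimate the model's prior bias from a content-free input, then divide
# it out of the scores for a real example (no parameter updates involved).
prior = label_probs("")                      # bias toward frequent label words
scores = label_probs("The movie was fun.")   # uncalibrated label probabilities
calibrated = scores / prior                  # simple prior normalization
prediction = label_words[int(calibrated.argmax())]
print(prediction)
```

The division by the content-free prior is one way to counteract the frequency bias described in the abstract: a label word that the model assigns high probability regardless of the input contributes less to the final decision.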
Submission Number: 4118