SSP-CLT: Self-Supervised Prompting for Cross-Lingual Transfer to Low-Resource Languages using Large Language Models

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: In-Context Learning (ICL) is a widely embraced paradigm for eliciting task-specific capabilities from large language models (LLMs). Present-day LLMs with ICL have shown exceptional performance on several English NLP tasks, but their utility in other languages is still underexplored. Our work investigates their effectiveness for NLP tasks in low-resource languages (LRLs), especially for cross-lingual transfer, where task-specific training data for one or more related languages is available. We propose Self-Supervised Prompting for Cross-Lingual Transfer (SSP-CLT), a novel approach for zero-shot cross-lingual transfer to LRLs. SSP-CLT works in two stages and has two variants. In the first variant, Stage I retrieves exemplars for a given target test instance from the source training data and includes them in the LLM prompt for ICL, yielding an initial labeling. Once all test instances are labeled, Stage II repeats the process, but draws exemplars from the Stage I labelings of other test datapoints in the target language. The second variant of SSP-CLT uses a fine-tuned model for Stage I predictions, while Stage II uses an Integer Linear Programming (ILP)-based exemplar selection that balances similarity, confidence, and label coverage. Experiments on three tasks and three language families demonstrate that SSP-CLT strongly outperforms supervised baselines as well as other prompting approaches.
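To make the Stage II selection objective concrete: the ILP in the second variant picks a fixed-size exemplar set that trades off similarity to the test instance and pseudo-label confidence while covering the label set. The sketch below is not the paper's formulation; it solves the same kind of constrained selection exactly by brute-force enumeration over a tiny hypothetical candidate pool (all scores, weights, and labels are invented for illustration).

```python
from itertools import combinations

# Hypothetical pool of Stage I pseudo-labeled exemplars. Each entry carries
# a similarity to the test instance, the model's confidence in its
# pseudo-label, and that label. All values here are invented.
candidates = [
    {"id": 0, "sim": 0.9, "conf": 0.6,  "label": "PER"},
    {"id": 1, "sim": 0.8, "conf": 0.9,  "label": "LOC"},
    {"id": 2, "sim": 0.7, "conf": 0.7,  "label": "PER"},
    {"id": 3, "sim": 0.4, "conf": 0.95, "label": "ORG"},
    {"id": 4, "sim": 0.6, "conf": 0.5,  "label": "ORG"},
]

def select_exemplars(pool, k, alpha=0.5, beta=0.5):
    """Exact selection by enumeration: choose k exemplars maximizing
    sum(alpha*sim + beta*conf), subject to every label in the pool
    being covered by at least one chosen exemplar."""
    all_labels = {c["label"] for c in pool}
    best, best_score = None, float("-inf")
    for subset in combinations(pool, k):
        if {c["label"] for c in subset} != all_labels:
            continue  # label-coverage constraint violated
        score = sum(alpha * c["sim"] + beta * c["conf"] for c in subset)
        if score > best_score:
            best, best_score = subset, score
    return best

chosen = select_exemplars(candidates, k=3)
print(sorted(c["id"] for c in chosen))  # one exemplar per label, best total score
```

An actual ILP solver (e.g. one accessed via PuLP or OR-Tools) replaces the enumeration with binary selection variables and linear constraints, which scales to realistic pool sizes; the objective and coverage constraint stay the same shape.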
Paper Type: long
Research Area: Multilinguality and Language Diversity
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings
Languages Studied: African, Germanic, and Indigenous languages of the Americas