Abstract: Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs)—languages for which NLP research is particularly far behind in meeting user needs—it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks—tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios including text-only, multimodal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models.