Y-NQ: English-Yorùbá Evaluation dataset for Open-Book Reading Comprehension with Open-Ended Questions
Abstract: The purpose of this work is to share an English-Yorùbá evaluation dataset for open-book reading comprehension with open-ended questions to assess the performance of models both in a high- and a low-resource language. The dataset contains 358 questions and answers on 338 English documents and 208 Yorùbá documents.
Experiments show a consistent performance disparity between the two languages, with Yorùbá falling behind English on automatic metrics even though documents are much shorter in Yorùbá.
For a small set of documents of comparable length, Yorùbá performance drops by a factor of 2.5, and this comparison is validated with human evaluation.
When analyzing performance by document length, we observe that Yorùbá performance decreases dramatically for documents that reach 1,500 words, while English performance is barely affected at that length. Our dataset opens the door to showing whether the reading comprehension capabilities of English LLMs extend to Yorùbá, which is not the case for the LLMs evaluated.
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: Yorùbá resources, Open-Book, Open-Ended, Reading Comprehension
Contribution Types: Data resources
Languages Studied: Yorùbá, English
Submission Number: 1029