Y-NQ: English-Yorùbá Evaluation dataset for Open-Book Reading Comprehension and Text Generation

ACL ARR 2024 December Submission968 Authors

15 Dec 2024 (modified: 05 Feb 2025) · ACL ARR 2024 December Submission · CC BY 4.0
Abstract: The purpose of this work is to share an English-Yorùbá evaluation dataset for open-book reading comprehension and text generation, to assess the performance of models in both a high- and a low-resource language. The dataset contains 358 questions and answers over 338 English documents and 208 Yorùbá documents. The average document length is about 10k words for English and 430 words for Yorùbá. Experiments show a consistent disparity in performance between the two languages, with Yorùbá falling behind English on automatic metrics even though documents are much shorter for this language. For a small set of documents of comparable length, performance on Yorùbá drops by a factor of 2.5. When analyzing performance by length, we observe that Yorùbá performance decreases dramatically for documents reaching 1,500 words, while English performance is barely affected at that length. Our dataset opens the door to assessing whether the English reading comprehension capabilities of LLMs extend to Yorùbá, which for the evaluated LLMs is not the case.
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: Yorùbá, Reading Comprehension
Contribution Types: Data resources
Languages Studied: Yorùbá, English
Submission Number: 968