MTG: A Benchmark Suite for Multilingual Text Generation

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=7nZKJamevdQ
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: We introduce MTG, a new benchmark suite for training and evaluating multilingual text generation. It is the first multilingual multiway text generation dataset and has the largest amount of human-annotated data to date (400k instances). It covers four generation tasks (story generation, question generation, title generation and text summarization) across five languages (English, German, French, Spanish and Chinese). The multiway setup enables testing a model's knowledge transfer capabilities across languages and tasks. Using MTG, we train and analyze several popular multilingual generation models from different aspects. Our benchmark suite fosters model performance improvement through more human-annotated parallel data and provides comprehensive evaluation across diverse generation scenarios. Code and data are available at \url{https://github.com/zide05/MTG}.
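The multiway setup described in the abstract means each instance carries parallel annotations across the five languages, so a model can be trained or evaluated on any source-target language combination for a given task. The sketch below illustrates one plausible way to represent such a record in Python; the class, field names and language codes are assumptions for illustration, not the actual schema of the data released at https://github.com/zide05/MTG.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Languages and tasks listed in the abstract.
LANGUAGES = ["en", "de", "fr", "es", "zh"]
TASKS = [
    "story_generation",
    "question_generation",
    "title_generation",
    "text_summarization",
]


@dataclass
class MultiwayExample:
    """One hypothetical multiway record: the same instance annotated in all five languages.

    `sources` and `targets` map a language code to the input text and the
    reference output text for the given task (field names are assumed).
    """
    task: str
    sources: Dict[str, str]
    targets: Dict[str, str]

    def cross_lingual_pair(self, src_lang: str, tgt_lang: str) -> Tuple[str, str]:
        """Return a (source, reference) pair for cross-lingual generation,
        e.g. summarize a German document into a Spanish summary."""
        return self.sources[src_lang], self.targets[tgt_lang]


# Usage with placeholder text: any of the 5 x 5 language pairs can be formed.
example = MultiwayExample(
    task="text_summarization",
    sources={lang: f"<document in {lang}>" for lang in LANGUAGES},
    targets={lang: f"<summary in {lang}>" for lang in LANGUAGES},
)
src, ref = example.cross_lingual_pair("de", "es")
```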
Presentation Mode: This paper will be presented in person in Seattle
Virtual Presentation Timezone: UTC-8
Copyright Consent Signature (type Name Or NA If Not Transferrable): Yiran Chen
Copyright Consent Name And Address: Bytedance, No. 48, Zhichun Road, Haidian District, Beijing