CrossSum: Beyond English-Centric Cross-Lingual Summarization for 1,500+ Language Pairs

Anonymous

16 Oct 2022 (modified: 05 May 2023) · ACL ARR 2022 October Blind Submission · Readers: Everyone
Keywords: Cross-lingual summarization, Natural Language Generation
Abstract: We present CrossSum, a large-scale cross-lingual summarization dataset comprising 1.68 million article-summary samples in 1,500+ language pairs. We create CrossSum by aligning identical articles written in different languages via cross-lingual retrieval from a multilingual summarization dataset, and we perform a controlled human evaluation to validate its quality. We propose a multistage data sampling algorithm to effectively train a cross-lingual summarization model capable of summarizing an article in any target language. We also introduce LaSE, an embedding-based metric for automatically evaluating model-generated summaries. LaSE is strongly correlated with ROUGE and, unlike ROUGE, can be reliably measured even in the absence of references in the target language. Performance on both ROUGE and LaSE indicates that pretrained models fine-tuned on CrossSum consistently outperform baseline models. To the best of our knowledge, CrossSum is the largest cross-lingual summarization dataset and the first that is not centered around English. We will release the dataset, alignment and training scripts, and the models to spur future research on cross-lingual summarization.
Paper Type: long
Research Area: Resources and Evaluation
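
The abstract describes LaSE only at a high level. As a rough, non-authoritative illustration of how an embedding-based metric can score a generated summary against a reference written in a different language, the sketch below assumes LaBSE (loaded through the sentence-transformers library) as the multilingual encoder and plain cosine similarity as the meaning-similarity component; the paper's actual LaSE formulation may include components beyond this sketch.

```python
# Minimal sketch of an embedding-based summary score in the spirit of LaSE.
# Assumptions (not taken from the paper): LaBSE via sentence-transformers as
# the encoder, and cosine similarity as the meaning-similarity component.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("sentence-transformers/LaBSE")

def embedding_similarity(generated: str, reference: str) -> float:
    """Cosine similarity between multilingual sentence embeddings.

    Because LaBSE maps all languages into a shared space, the reference
    may be in a different language than the generated summary -- the
    property that lets such a metric be measured even without references
    in the target language.
    """
    # normalize_embeddings=True makes the dot product equal cosine similarity
    emb = model.encode([generated, reference], normalize_embeddings=True)
    return float(np.dot(emb[0], emb[1]))

# Example: English candidate scored against a Spanish reference.
print(embedding_similarity(
    "The new dataset covers more than 1,500 language pairs.",
    "El nuevo conjunto de datos cubre más de 1.500 pares de idiomas.",
))
```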