Abstract: Cross-lingual summarization (XLS) addresses summary generation in a language different from that of the input document (e.g., English to Spanish). The currently predominant approach to this task is to take a high-performing, pretrained multilingual language model (LM) and fine-tune it for XLS on the language pairs of interest. However, the scarcity of fine-tuning resources makes this approach unviable in some cases. For this reason, in this paper we propose revisiting the summarize-and-translate pipeline, in which summarization and translation are performed in sequence. This approach allows reusing the many publicly available resources for monolingual summarization and translation, achieving very competitive zero-shot performance. In addition, the proposed pipeline is fully differentiable end-to-end, allowing it to take advantage of few-shot fine-tuning where available. Experiments over two contemporary and widely adopted XLS datasets (CrossSum and WikiLingua) show the remarkable zero-shot performance of the proposed approach, as well as its strong few-shot performance compared to an equivalent multilingual LM baseline, which it outperforms in many languages with only 0.1X (one-tenth) of the shots.
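The summarize-and-translate pipeline the abstract revisits can be sketched as a simple composition of two stages. This is a minimal illustrative sketch, not the paper's implementation: the two stage functions below are placeholder stubs, whereas in practice each would wrap a pretrained monolingual summarizer and a machine-translation model, and the end-to-end differentiability mentioned in the abstract applies to the neural models, not to these stubs.

```python
def summarize(document: str) -> str:
    # Placeholder monolingual summarizer (illustrative assumption):
    # simply keep the first sentence of the document.
    return document.split(". ")[0].rstrip(".") + "."


def translate(text: str, target_lang: str) -> str:
    # Placeholder translator (illustrative assumption):
    # tag the text with the target language instead of translating it.
    return f"[{target_lang}] {text}"


def cross_lingual_summarize(document: str, target_lang: str) -> str:
    # The pipeline runs the two tasks in sequence: summarize in the
    # source language, then translate the summary into the target language.
    summary = summarize(document)
    return translate(summary, target_lang)


if __name__ == "__main__":
    doc = "XLS generates a summary in another language. It can reuse monolingual resources."
    print(cross_lingual_summarize(doc, "es"))
    # → [es] XLS generates a summary in another language.
```

The key design point the abstract highlights is that each stage can be trained (or reused) independently with abundant monolingual summarization and translation resources, while the composed pipeline still supports few-shot fine-tuning as a whole.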
Paper Type: long
Research Area: Summarization
Languages Studied: English, Spanish, French, Arabic, Ukrainian, Azerbaijani, Bengali, Russian, Chinese, Turkish, Thai, Indonesian