Abstract: Multi-document summarization, a complex task in natural language processing, requires synthesizing information from multiple texts. While recent research has concentrated on pre-training, the role of fine-tuning has been underexplored. We introduce SynText, which extends the PRIMERA model for multi-document summarization with momentum calibration fine-tuning. Our results show that SynText surpasses the current state of the art on the MultiNews dataset across all major ROUGE metrics. This work highlights that fine-tuning strategies should not be taken for granted.
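The abstract names momentum calibration fine-tuning but does not spell out the objective. Below is a minimal sketch of a calibration-style pairwise ranking loss of the kind used in momentum calibration, where candidate summaries are ordered by their ROUGE against the reference and the model is pushed to score better candidates higher. In the momentum calibration setup the candidates are sampled from a slowly updated momentum copy of the generator; the function name, margin value, and tensor shapes here are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def calibration_loss(candidate_logprobs: torch.Tensor,
                     rouge_scores: torch.Tensor,
                     margin: float = 0.001) -> torch.Tensor:
    """Pairwise ranking (calibration) loss over candidate summaries.

    candidate_logprobs: (num_candidates,) length-normalized sequence
        log-probabilities assigned by the model being fine-tuned.
    rouge_scores: (num_candidates,) ROUGE of each candidate against the
        reference, used only to order the candidates.
    """
    # Sort candidates from best to worst according to ROUGE.
    order = torch.argsort(rouge_scores, descending=True)
    scores = candidate_logprobs[order]

    loss = scores.new_zeros(())
    num_pairs = 0
    # For each pair (i, j) with i ranked above j, penalize the model if it
    # fails to score the better candidate higher by a rank-dependent margin.
    for i in range(len(scores)):
        for j in range(i + 1, len(scores)):
            loss = loss + F.relu(scores[j] - scores[i] + margin * (j - i))
            num_pairs += 1
    return loss / max(num_pairs, 1)
```

In practice this term would be combined with the standard cross-entropy fine-tuning loss on the reference summary.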
Paper Type: short
Research Area: Summarization
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches for low compute settings-efficiency, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: English