Improving Sequence-to-Sequence Learning via Optimal Transport

Published: 21 Dec 2018, Last Modified: 03 Apr 2024. ICLR 2019 Conference Blind Submission.
Abstract: Sequence-to-sequence models are commonly trained via maximum likelihood estimation (MLE). However, standard MLE training considers a word-level objective, predicting the next word given the previous ground-truth partial sentence. This procedure focuses on modeling local syntactic patterns, and may fail to capture long-range semantic structure. We present a novel solution to alleviate these issues. Our approach imposes global sequence-level guidance via new supervision based on optimal transport, enabling the overall characterization and preservation of semantic features. We further show that this method can be understood as a Wasserstein gradient flow trying to match our model to the ground truth sequence distribution. Extensive experiments are conducted to validate the utility of the proposed approach, showing consistent improvements over a wide variety of NLP tasks, including machine translation, abstractive text summarization, and image captioning.
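The abstract describes adding an optimal-transport distance between the generated and ground-truth sequences as sequence-level supervision on top of the usual MLE objective. Below is a minimal sketch of that idea, not the authors' implementation: it computes an entropy-regularized OT distance (standard log-domain Sinkhorn iterations, which may differ from the solver used in the paper) between the embeddings of the predicted and reference tokens. The names `sinkhorn_ot_loss`, `pred_emb`, `ref_emb`, and the hyperparameters `eps`, `n_iters`, and `gamma` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def sinkhorn_ot_loss(pred_emb, ref_emb, eps=0.1, n_iters=50):
    """Entropy-regularized OT distance between two embedded token sequences (a sketch)."""
    # Cosine-distance cost between every predicted and reference token embedding.
    cost = 1.0 - F.normalize(pred_emb, dim=-1) @ F.normalize(ref_emb, dim=-1).t()  # (m, n)
    m, n = cost.shape
    mu = torch.full((m,), 1.0 / m, device=cost.device)  # uniform mass over predicted tokens
    nu = torch.full((n,), 1.0 / n, device=cost.device)  # uniform mass over reference tokens

    # Log-domain Sinkhorn iterations for numerical stability.
    f = torch.zeros(m, device=cost.device)
    g = torch.zeros(n, device=cost.device)
    for _ in range(n_iters):
        f = eps * (torch.log(mu) - torch.logsumexp((g[None, :] - cost) / eps, dim=1))
        g = eps * (torch.log(nu) - torch.logsumexp((f[:, None] - cost) / eps, dim=0))

    plan = torch.exp((f[:, None] + g[None, :] - cost) / eps)  # soft alignment between tokens
    return (plan * cost).sum()


# Toy usage: random tensors stand in for decoder-output and reference embeddings.
pred_emb = torch.randn(12, 256, requires_grad=True)
ref_emb = torch.randn(15, 256)
ot_term = sinkhorn_ot_loss(pred_emb, ref_emb)
# total_loss = mle_loss + gamma * ot_term  # OT term complements the word-level MLE loss
```

In this reading, the OT term rewards soft, global alignment of semantic content between the two sequences, which is the sequence-level guidance the abstract contrasts with the purely word-level MLE objective.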
Keywords: NLP, optimal transport, sequence to sequence, natural language processing
Data: [MS COCO](https://paperswithcode.com/dataset/coco)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:1901.06283/code)