BLISS: Robust Sequence-to-Sequence Learning via Self-Supervised Input Representation

Anonymous

08 Mar 2022 (modified: 05 May 2023) | NAACL 2022 Conference Blind Submission | Readers: Everyone
Paper Link: https://openreview.net/forum?id=CtuhD9Oew9S
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Data augmentation (DA) is central to achieving robust sequence-to-sequence learning on various NLP tasks. However, most DA approaches force the decoder to make predictions conditioned on the perturbed input representation, which we argue may make sequence-to-sequence learning sub-optimal. In response to this problem, we propose a framework-level robust sequence-to-sequence learning approach, namely BLISS, via self-supervised input representation, which has great potential to complement data-level augmentation approaches. The core idea is to supervise the sequence-to-sequence framework with both supervised (``input$\rightarrow$output'') and self-supervised (``perturbed input$\rightarrow$input'') information. Experimental results show that BLISS outperforms the vanilla Transformer and five comparative baselines on several NLP benchmarks, including machine translation, grammatical error correction and text summarization. Extensive analyses reveal that BLISS learns robust representations and rich linguistic knowledge, confirming our claim. Source code will be released upon publication.
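
The following is a minimal PyTorch sketch of the dual-supervision idea stated in the abstract, not the authors' released code: one encoder-decoder is trained jointly on the supervised pair ("input -> output") and a self-supervised pair ("perturbed input -> input"). The model sizes, the token-dropout perturbation, and the weighting coefficient `lambda_self` are illustrative assumptions rather than details from the paper.

```python
# Hedged sketch of joint supervised + self-supervised seq2seq training.
# All hyperparameters and the perturbation choice are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, PAD, D_MODEL = 1000, 0, 64

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL, padding_idx=PAD)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        self.proj = nn.Linear(D_MODEL, VOCAB)

    def forward(self, src, tgt):
        # Teacher forcing with a causal mask: predict tgt[:, 1:] from tgt[:, :-1].
        tgt_in = tgt[:, :-1]
        causal = self.transformer.generate_square_subsequent_mask(tgt_in.size(1))
        h = self.transformer(self.embed(src), self.embed(tgt_in), tgt_mask=causal)
        return self.proj(h)

def perturb(src, drop_prob=0.15):
    """Token dropout as one possible input perturbation (assumption, not the paper's)."""
    mask = (torch.rand(src.shape) < drop_prob) & (src != PAD)
    return src.masked_fill(mask, PAD)

def bliss_style_loss(model, src, tgt, lambda_self=0.5):
    # Supervised signal: clean input -> output.
    sup_logits = model(src, tgt)
    sup_loss = F.cross_entropy(sup_logits.reshape(-1, VOCAB),
                               tgt[:, 1:].reshape(-1), ignore_index=PAD)
    # Self-supervised signal: perturbed input -> original input.
    self_logits = model(perturb(src), src)
    self_loss = F.cross_entropy(self_logits.reshape(-1, VOCAB),
                                src[:, 1:].reshape(-1), ignore_index=PAD)
    return sup_loss + lambda_self * self_loss

if __name__ == "__main__":
    model = Seq2Seq()
    src = torch.randint(1, VOCAB, (8, 12))
    tgt = torch.randint(1, VOCAB, (8, 10))
    loss = bliss_style_loss(model, src, tgt)
    loss.backward()
    print(float(loss))
```

In this sketch the two objectives share all parameters and are simply summed with a weight on the self-supervised term; how BLISS actually combines the two signals is described in the paper itself.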