CAVSS: A Commonsense Augmented Variational Sequence to Sequence Model for Language Generation

Anonymous

03 Sept 2022 (modified: 05 May 2023) · ACL ARR 2022 September Blind Submission
Abstract: Commonsense knowledge, used as external knowledge, enriches a model's semantic understanding of its input sequences and guides text generation. In this paper, we propose a novel approach to incorporating commonsense knowledge into end-to-end text generation models. First, given an input sequence, we retrieve the relevant knowledge triples and concatenate the commonsense-knowledge embedding with the context vector produced by the encoder for latent sampling; the prior distribution is trained to approximate the posterior distribution, so that appropriate knowledge can be selected even when posterior information is unavailable. We then apply an autoregressive transformation to the sampled latent to avoid the slow fit of a simple Gaussian distribution, and design a new training objective that pulls this transformed distribution toward the posterior. In addition, we apply variational operations to the attention mechanism on the decoder side to weaken the attention strength, preventing reconstruction from dominating generation while the other modules are ignored. Experiments show that our model generates more fluent and significantly more diverse sentences, and an analysis of each module's contribution to the model yields satisfactory results.
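To make the latent-variable machinery in the abstract concrete, below is a minimal sketch in PyTorch of the sampling step as described: the commonsense-knowledge embedding is concatenated with the encoder context vector, a prior network is trained via a KL term to approximate the posterior, and an autoregressive affine transform reshapes the sampled latent. The paper's actual architecture is not public, so all module names, dimensions, and the single-step flow here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions labeled) of the variational sampling in CAVSS-style
# models: prior/posterior over a latent z conditioned on [context; knowledge],
# with one autoregressive affine step applied to the sample.
import torch
import torch.nn as nn

class LatentSampler(nn.Module):
    def __init__(self, ctx_dim: int, know_dim: int, tgt_dim: int, z_dim: int):
        super().__init__()
        # Prior network: sees only the encoder context and knowledge embedding.
        self.prior_net = nn.Linear(ctx_dim + know_dim, 2 * z_dim)
        # Posterior (recognition) network: additionally sees the encoded target.
        self.post_net = nn.Linear(ctx_dim + know_dim + tgt_dim, 2 * z_dim)
        # One autoregressive affine step: dim i of z is scaled/shifted by a
        # strictly-lower-triangular transform of dims < i (a stand-in for a
        # full autoregressive-flow stack; hypothetical simplification).
        self.flow_shift = nn.Linear(z_dim, z_dim, bias=False)
        self.flow_scale = nn.Linear(z_dim, z_dim, bias=False)
        self.register_buffer(
            "mask", torch.tril(torch.ones(z_dim, z_dim), diagonal=-1))

    @staticmethod
    def reparameterize(mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def flow(self, z):
        # z'_i = z_i * exp(s_i(z_<i)) + t_i(z_<i); the mask enforces the
        # autoregressive dependence, so log|det J| = sum_i s_i.
        shift = ((self.mask * self.flow_shift.weight) @ z.unsqueeze(-1)).squeeze(-1)
        log_scale = ((self.mask * self.flow_scale.weight) @ z.unsqueeze(-1)).squeeze(-1)
        return z * torch.exp(log_scale) + shift, log_scale.sum(-1)

    def forward(self, ctx, know, target_enc=None):
        prior_mu, prior_lv = self.prior_net(torch.cat([ctx, know], -1)).chunk(2, -1)
        if target_enc is not None:  # training: sample from the posterior
            post_mu, post_lv = self.post_net(
                torch.cat([ctx, know, target_enc], -1)).chunk(2, -1)
            z = self.reparameterize(post_mu, post_lv)
            # KL(q || p) keeps the prior close to the posterior, so that at
            # inference time (no target) the prior alone can pick knowledge.
            kl = 0.5 * (prior_lv - post_lv - 1
                        + (post_lv.exp() + (post_mu - prior_mu) ** 2)
                        / prior_lv.exp()).sum(-1)
        else:  # inference: sample from the prior
            z, kl = self.reparameterize(prior_mu, prior_lv), None
        z, logdet = self.flow(z)  # autoregressive transform of the sample
        return z, kl, logdet
```

Under these assumptions, the transformed `z` would be fed to the decoder, and `kl` (plus the `logdet` correction for the flow) would enter the training objective alongside the reconstruction loss; the abstract's variational attention would apply an analogous sample-and-regularize step to the decoder's attention weights.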
Paper Type: long