Semi-Supervised Seq2seq Joint-Stochastic-Approximation Autoencoders With Applications to Semantic Parsing

2020 (modified: 06 Nov 2022) · IEEE Signal Process. Lett. 2020
Abstract: Developing Semi-Supervised Seq2Seq (S4) learning for sequence transduction tasks in natural language processing (NLP), e.g. semantic parsing, is challenging, since both the input and the output sequences are discrete. This discrete nature poses difficulties for methods that require gradients from either the input space or the output space. Recently, a new learning method called joint stochastic approximation was developed for unsupervised learning of fixed-dimensional autoencoders; it theoretically avoids gradient propagation through discrete latent variables, a problem that Variational Auto-Encoders (VAEs) suffer from. In this letter, we propose seq2seq Joint-stochastic-approximation AutoEncoders (JAEs) and apply them to S4 learning for NLP sequence transduction tasks. Further, we propose bi-directional JAEs (called bi-JAEs) to leverage not only unpaired input sequences (the most commonly studied setting) but also unpaired output sequences. Experiments on two benchmark datasets for semantic parsing show that JAEs consistently outperform VAEs in S4 learning, and that bi-JAEs yield further improvements.
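To see why discreteness blocks ordinary backpropagation (the difficulty the abstract refers to), consider a toy sketch, not taken from the paper: a sampled discrete token is piecewise constant in the underlying probability, so its pathwise derivative is zero almost everywhere, and one must instead use a likelihood-based (score-function) estimator. The function names and the Bernoulli setup here are illustrative assumptions, not the paper's model.

```python
import random

random.seed(0)

def sample_token(p):
    # A Bernoulli "token": discrete output, so d(token)/dp is zero
    # almost everywhere -- no useful pathwise gradient exists.
    return 1 if random.random() < p else 0

def reward(token):
    # Toy downstream objective depending on the discrete token.
    return 3.0 if token == 1 else 1.0

p = 0.4
N = 200_000

# Score-function (REINFORCE-style) estimator of
# d/dp E[reward(token)] = E[reward(token) * d log P(token)/dp],
# which sidesteps differentiating through the discrete sample.
est = 0.0
for _ in range(N):
    t = sample_token(p)
    dlogp = (1.0 / p) if t == 1 else (-1.0 / (1.0 - p))
    est += reward(t) * dlogp
est /= N

# True gradient: d/dp [3p + 1(1-p)] = 2
print(round(est, 1))
```

The estimate converges to 2.0, the analytic gradient, even though the sampling step itself is non-differentiable; methods like JSA and VAEs differ precisely in how they handle (or avoid) this kind of estimator for discrete latent sequences.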