Fake Sentence Detection as a Training Task for Sentence Encoding

Anonymous

23 May 2018 (modified: 23 May 2018) · OpenReview Anonymous Preprint Blind Submission
Abstract: Sentence encoders are typically trained on language modeling tasks, which let them exploit large unlabeled datasets. While these models achieve state-of-the-art results on many sentence-level tasks, they are costly to train, requiring long training cycles. We introduce fake sentence detection as a new training task for learning sentence encodings. We automatically generate fake sentences by corrupting original sentences and train the encoders to produce representations that are effective at detecting fake sentences. This binary classification task allows for efficient training and forces the encoder to learn the distinctions introduced by small edits to a sentence. We train a basic BiLSTM encoder to produce sentence representations and find that it outperforms a strong sentence encoding model trained on language modeling tasks, while training much faster on a smaller amount of data (20 hours instead of weeks). Further analysis shows that the learned representations capture many syntactic and semantic properties expected of good sentence representations.
Keywords: Fake Sentence Detection, Sentence Encoder, Unsupervised Learning
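The abstract describes generating fake sentences by applying small corruptions to real sentences and labeling the pairs for binary classification. Below is a minimal sketch of that data-generation step. The specific corruption operations (dropping one word or swapping two adjacent words) and the function names are illustrative assumptions; the abstract says only that fake sentences come from corrupting original sentences with a small edit.

```python
import random

def make_fake(sentence, rng=random):
    """Apply a small edit to a sentence: drop one word or swap two
    adjacent words. (The choice of edits is an assumption here.)"""
    words = sentence.split()
    if len(words) < 2:
        return sentence  # too short to corrupt meaningfully
    if rng.random() < 0.5:
        # Drop a randomly chosen word.
        i = rng.randrange(len(words))
        words = words[:i] + words[i + 1:]
    else:
        # Swap two adjacent words.
        i = rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def make_training_pairs(corpus, rng=random):
    """Yield (sentence, label) pairs: real sentences labeled 1,
    corrupted versions labeled 0, giving the binary
    fake-sentence-detection training set."""
    for sentence in corpus:
        yield sentence, 1
        yield make_fake(sentence, rng), 0

if __name__ == "__main__":
    corpus = [
        "the cat sat on the mat",
        "sentence encoders learn fixed-length representations",
    ]
    for text, label in make_training_pairs(corpus):
        print(label, text)
```

A sentence encoder (the paper uses a basic BiLSTM) would then be trained end-to-end with a binary classifier on these pairs, so the encoder must represent the fine-grained distinctions that separate a real sentence from its lightly corrupted copy.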