Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments

Anonymous

11 Dec 2018 (modified: 11 Dec 2018) · OpenReview Anonymous Preprint Blind Submission · Readers: Everyone
Abstract: Recent pretrained sentence encoders achieve state-of-the-art results on language understanding tasks, but does this mean they have implicit knowledge of syntactic structures? We introduce a grammatically annotated development set for the Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2018), which we use to investigate the grammatical knowledge of three pretrained encoders, including the popular OpenAI Transformer (Radford et al., 2018) and BERT (Devlin et al., 2018). We fine-tune these encoders to perform acceptability classification on CoLA and compare the models' performance on the annotated analysis set. Some phenomena, e.g. modification by adjuncts, are easy for all models to learn; others, e.g. long-distance movement, are learned effectively only by models with strong overall performance; and others still, e.g. morphological agreement, are hardly learned by any model.
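As a concrete illustration of the fine-tuning setup the abstract describes, the sketch below loads a pretrained encoder with a binary classification head for acceptability judgments. It assumes the Hugging Face transformers library; the bert-base-uncased checkpoint, the label convention, and the example sentences are illustrative assumptions, not the paper's exact configuration, and the head must first be fine-tuned on CoLA's training set for its predictions to be meaningful.

# Minimal sketch: a pretrained encoder as a binary acceptability classifier.
# Assumes the Hugging Face `transformers` library; checkpoint name, label
# convention, and example sentences are illustrative, not the paper's setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = unacceptable, 1 = acceptable
)
model.eval()

sentences = [
    "The cat sat on the mat.",  # acceptable
    "The cat sat the on mat.",  # unacceptable (word-order violation)
]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

# The classification head is randomly initialized here; fine-tune the whole
# model on CoLA's training set before treating these outputs as judgments.
with torch.no_grad():
    logits = model(**batch).logits
print(logits.argmax(dim=-1).tolist())  # per-sentence acceptability labels

Fine-tuning such a model end-to-end on CoLA's training sentences, then scoring each phenomenon in the annotated development set separately, mirrors the analysis setup the abstract describes.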
Keywords: acceptability judgments, sentence embeddings, neural networks, syntax
TL;DR: We investigate the implicit syntactic knowledge of sentence embeddings using a new analysis set of grammatically annotated sentences with acceptability judgments.