Models In a Spelling Bee: Language Models Implicitly Learn the Character Composition of Tokens

Anonymous

08 Mar 2022 (modified: 05 May 2023) | NAACL 2022 Conference Blind Submission | Readers: Everyone
Paper Link: https://openreview.net/forum?id=5zxEabUyW9K
Paper Type: Short paper (up to four pages of content + unlimited references and appendices)
Abstract: Standard pretrained language models operate on sequences of subword tokens without direct access to the characters that compose each token’s string representation. We probe the embedding layer of pretrained language models and show that models learn the internal character composition of whole word and subword tokens to a surprising extent, without ever seeing the characters coupled with the tokens. Our results show that the embedding layers of RoBERTa and GPT2 each hold enough information to accurately spell up to a third of the vocabulary and reach high character n-gram overlap across all token types. We further test whether enriching subword models with character information can improve language modeling, and observe that this method has a near-identical learning curve to training without spelling-based enrichment. Overall, our results suggest that language modeling objectives incentivize the model to implicitly learn some notion of spelling, and that explicitly teaching the model how to spell does not appear to enhance its performance on such tasks.
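
To make the probing setup concrete, the sketch below shows one way such an experiment could be set up: a linear probe is trained on the frozen input-embedding matrix of a pretrained model to predict which characters each token contains. This is a simplified illustration, not the paper's actual probe; the model name (roberta-base), the character-presence target (rather than full spelling), and the linear probe architecture are all assumptions made for the example.

```python
# Minimal sketch (not the authors' exact method): train a linear probe that predicts
# which lowercase letters appear in a token, using only the token's static embedding.
import string

import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumption: any model with a token embedding matrix works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

# Frozen input-embedding matrix: one row per vocabulary entry.
emb = model.get_input_embeddings().weight.detach()  # shape: (vocab_size, hidden_dim)

# Target: multi-hot vector over lowercase ASCII letters contained in the token string.
alphabet = string.ascii_lowercase
vocab = [tokenizer.convert_ids_to_tokens(i) for i in range(emb.size(0))]
targets = torch.zeros(len(vocab), len(alphabet))
for i, tok in enumerate(vocab):
    for j, ch in enumerate(alphabet):
        if ch in tok.lower():
            targets[i, j] = 1.0

# Simple linear probe trained with a multi-label (per-character) objective.
probe = nn.Linear(emb.size(1), len(alphabet))
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Random train/test split over vocabulary entries.
perm = torch.randperm(len(vocab))
split = int(0.8 * len(perm))
train_idx, test_idx = perm[:split], perm[split:]

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(probe(emb[train_idx]), targets[train_idx])
    loss.backward()
    opt.step()

with torch.no_grad():
    preds = (probe(emb[test_idx]).sigmoid() > 0.5).float()
    acc = (preds == targets[test_idx]).float().mean()
print(f"held-out per-character accuracy: {acc:.3f}")
```

If the probe recovers character membership from embeddings of tokens it never saw during probe training, that is evidence the embedding space encodes spelling information, which is the kind of question the paper investigates with its own probing setup.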
Presentation Mode: This paper will be presented in person in Seattle
Virtual Presentation Timezone: UTC+2
Copyright Consent Signature (type Name Or NA If Not Transferrable): Itay Itzhak
Copyright Consent Name And Address: Tel Aviv University, Ramat Aviv 69978, Israel