On the Ability of Self-Attention Networks to Recognize Counter Languages

12 Jul 2020 (modified: 21 Sep 2020) · OpenReview Anonymous Preprint Blind Submission · Readers: Everyone
  • Keywords: Transformers, counter languages, computational power, regular languages
  • Abstract: Transformers have supplanted recurrent models in a large number of NLP tasks. However, the differences in their abilities to model different syntactic properties remain largely unknown. Prior work suggests that LSTMs generalize very well on regular languages and have close connections with counter languages. In this work, we systematically study the ability of Transformers to model such languages, as well as the role of their individual components in doing so. We first provide a construction of Transformers for a subclass of counter languages, including well-studied languages such as n-ary Boolean Expressions, Dyck-1, and its generalizations. In experiments, we find that Transformers do well on this subclass, and their learned mechanism strongly correlates with our construction. Perhaps surprisingly, in contrast to LSTMs, Transformers do well only on a subset of regular languages, with performance degrading as we make the languages more complex according to a well-known measure of complexity. Our analysis also provides insights on the role of the self-attention mechanism in modeling certain behaviors, and on the influence of positional encoding schemes on the learning and generalization abilities of the model.
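To make the notion of a counter language concrete: Dyck-1 (the language of balanced parentheses, mentioned in the abstract) can be recognized with a single counter. The sketch below is only an illustration of that counter mechanism, not the paper's Transformer construction.

```python
def is_dyck1(s: str) -> bool:
    """Recognize Dyck-1 (balanced parentheses) with a single counter."""
    count = 0
    for ch in s:
        if ch == '(':
            count += 1
        elif ch == ')':
            count -= 1
            if count < 0:       # a closing bracket with no open match
                return False
        else:
            return False        # symbol outside the Dyck-1 alphabet
    return count == 0           # every open bracket was closed
```

For example, `is_dyck1("(()())")` accepts, while `is_dyck1("())(")` rejects as soon as the counter goes negative. Generalizations such as Shuffle-Dyck extend this idea to multiple independent counters, one per bracket type.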