What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages
Abstract: What can large language models learn? By definition, language models (LMs) are distributions over strings. Therefore, an intuitive way to address this question is to formalize it as the learnability of classes of distributions over strings. While prior work in this direction has focused on assessing theoretical limits, we seek to understand empirical learnability. Unlike prior empirical work, we evaluate LMs on their home ground, learning probability distributions over strings, rather than as classifiers of formal languages. In particular, we investigate the learnability of finite-state LMs (FSLMs). We first theoretically quantify the minimal representation size a neural LM needs to learn an FSLM in terms of the FSLM's rank, which corresponds to the dimension of the linear space spanned by the logits of its conditional distributions. We then empirically test the learnability of FSLMs and find that the rank is a strong predictor of learnability for both Transformers and RNNs, whereas the importance of other properties of the FSLM differs between the two architectures.
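To make the notion of rank concrete, here is a minimal sketch, not taken from the paper, that computes it for a toy probabilistic finite-state LM. It assumes the rank can be read off as the matrix rank of the state-by-symbol logit matrix and takes log-probabilities as the logits; the automaton, its probabilities, and all names are illustrative.

```python
import numpy as np

# Toy probabilistic finite-state LM over the alphabet {a, b} plus EOS.
# Each row of `probs` is one state's conditional next-symbol distribution
# p(y | state), so every row sums to 1. The automaton is made up for illustration.
probs = np.array([
    [0.60, 0.30, 0.10],  # state 0
    [0.30, 0.60, 0.10],  # state 1
    [0.20, 0.70, 0.10],  # state 2
    [0.60, 0.30, 0.10],  # state 3: identical distribution to state 0
])

# Treat the log-probabilities as the logits of the conditional distributions.
logits = np.log(probs)

# Rank in the sense sketched by the abstract: the dimension of the linear
# space spanned by the states' logit vectors. Duplicate (or linearly
# dependent) rows, such as state 3, do not increase it.
rank = np.linalg.matrix_rank(logits)
print(f"{probs.shape[0]} states, logit-matrix rank {rank}")
```

Under this reading, an FSLM can have many more states than its rank, which is what makes the rank, rather than the raw state count, a candidate predictor of how large a neural LM must be to learn it.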
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, Theory
Languages Studied: Non-human Formal Languages