Keywords: small language models, pretraining
Abstract: Prior work has found that training very small language models (SLMs) on synthetic children's stories allows them to generate coherent text, comparable to that produced by much larger models. These stories are claimed to encompass the vocabulary and factual knowledge base of a 3-4-year-old child, capturing the "essence of natural language."
Because of these claims, it is tempting to attribute the findings to the high readability (i.e., simple language) of children's stories, drawing a parallel to how children learn language.
Is the human concept of readability relevant in the context of language model training, or are these findings better explained by other properties of the data?
In this study, we investigate this by first validating several automatic readability measures. We then create synthetic corpora with varying levels of readability and assess the coherence of text generated by SLMs trained on these corpora.
We find that training on high readability text is not a prerequisite for coherent SLMs. Specifically, SLMs trained on data with substantially more complex language exhibit the same abilities as those trained on simple language. Moreover, training on simple language does not lead to the earlier development of coherence during training.
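As an illustration of the kind of automatic readability measure the abstract refers to, the sketch below computes the classic Flesch Reading Ease score. The specific measures validated in the study are not named here, and the syllable counter is a rough heuristic assumed for demonstration only.

```python
# Minimal sketch of one automatic readability measure (Flesch Reading Ease).
# Assumption: a crude regex-based syllable counter; real validation would use
# a proper syllabifier or an established readability library.
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Higher scores indicate simpler (more readable) language.
print(flesch_reading_ease("The cat sat on the mat. It was happy."))
```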
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13303