Think Big, Teach Small: Do Language Models Distil Occam’s Razor?

21 May 2021, 20:44 (modified: 26 Oct 2021, 16:16) · NeurIPS 2021 Poster · Readers: Everyone
Keywords: Humans and AI, Cognitive Systems, Explainability, Language Models, Inductive Programming, Occam's razor, Machine Teaching
TL;DR: We analyse experimentally whether language models distil Occam's razor in a few-shot inference setting designed through machine teaching, comparing their results against those of humans and inductive programming systems.
Abstract: Large language models have recently shown a remarkable ability for few-shot learning, including patterns of an algorithmic nature. However, it remains an open question what kinds of patterns these models can capture and how many examples they need in their prompts. We frame this question as a teaching problem with strong priors, and study whether language models can identify simple algorithmic concepts from small witness sets. In particular, we explore how several GPT architectures, program induction systems and humans perform in terms of the complexity of the concept and the number of additional examples, and how much their behaviour differs. This first joint analysis of language models and machine teaching can address key questions for artificial intelligence and machine learning, such as whether some strong priors, and Occam’s razor in particular, can be distilled from data, making learning from a few examples possible.
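As an illustration of the setting the abstract describes, a small witness set for a simple algorithmic concept can be serialised into a few-shot prompt for a language model. The concept used here (string reversal) and the `Input:`/`Output:` formatting are illustrative assumptions, not the paper's exact protocol:

```python
# Hedged sketch: turning a witness set for a simple algorithmic concept
# into a few-shot completion prompt. The concept (string reversal) and
# the prompt format are assumptions for illustration only.

def concept(s: str) -> str:
    """Ground-truth concept the learner should induce: reverse the string."""
    return s[::-1]

def build_prompt(witness_inputs, query):
    """Serialise (input, output) witness pairs plus an open query."""
    lines = [f"Input: {x} Output: {concept(x)}" for x in witness_inputs]
    lines.append(f"Input: {query} Output:")
    return "\n".join(lines)

# A two-example witness set and a held-out query:
prompt = build_prompt(["abc", "hello"], "world")
print(prompt)
```

A model (or a human, or a program induction system) completing the final line correctly would be evidence that it induced the simplest concept consistent with the witness set.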
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://github.com/gonzalojaimovitch/think-big-teach-small
