Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language

Published: 21 Sept 2023, Last Modified: 02 Nov 2023, NeurIPS 2023 Oral
Keywords: Cognitive science, Bayesian, Language model, Induction, Psychology, Reasoning
TL;DR: We build a model of human concept learning by integrating language models with probabilistic reasoning.
Abstract: A core tension in models of concept learning is that the model must carefully balance the tractability of inference against the expressivity of the hypothesis class. Humans, however, can efficiently learn a broad range of concepts. We introduce a model of inductive learning that seeks to be human-like in that sense. It implements a Bayesian reasoning process where a language model first proposes candidate hypotheses expressed in natural language, which are then re-weighted by a prior and a likelihood. By estimating the prior from human data, we can predict human judgments on learning problems involving numbers and sets, spanning concepts that are generative, discriminative, propositional, and higher-order.
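As a concrete illustration of the reweighting step the abstract describes, here is a minimal, self-contained Python sketch of the posterior computation p(h | D) ∝ p(h) · p(D | h) over natural-language hypotheses. The hand-written HYPOTHESES table, length-based prior, and size-principle likelihood are all placeholder assumptions for illustration: in the paper, candidate hypotheses are proposed by a language model and the prior is estimated from human data.

```python
import math

# Toy stand-in for LM proposals: each natural-language hypothesis is paired
# with its extension over the integers 1..100. (In the paper, hypotheses are
# proposed by a language model; here they are fixed by hand.)
UNIVERSE = range(1, 101)
HYPOTHESES = {
    "powers of two": {2 ** k for k in range(1, 7)},
    "even numbers": {n for n in UNIVERSE if n % 2 == 0},
    "multiples of four": {n for n in UNIVERSE if n % 4 == 0},
}

def prior(hypothesis: str) -> float:
    """Toy description-length prior: shorter phrasings get more mass.
    (An assumption; the paper instead estimates the prior from human data.)"""
    return math.exp(-len(hypothesis.split()))

def likelihood(extension: set, examples: list) -> float:
    """Size-principle likelihood: examples drawn uniformly from the extension."""
    if not all(x in extension for x in examples):
        return 0.0
    return (1.0 / len(extension)) ** len(examples)

def posterior(examples: list) -> dict:
    """Reweight each candidate by prior x likelihood, then normalize."""
    weights = {h: prior(h) * likelihood(ext, examples)
               for h, ext in HYPOTHESES.items()}
    z = sum(weights.values())
    return {h: (w / z if z else 0.0) for h, w in weights.items()}

def prob_in_concept(query: int, examples: list) -> float:
    """Posterior-weighted probability that `query` belongs to the concept."""
    return sum(p for h, p in posterior(examples).items()
               if query in HYPOTHESES[h])

if __name__ == "__main__":
    print(posterior([16, 8, 2]))            # mass concentrates on "powers of two"
    print(prob_in_concept(32, [16, 8, 2]))  # high: 32 is a power of two
```

With examples [16, 8, 2], the size-principle likelihood concentrates posterior mass on the smallest consistent extension ("powers of two"), the standard Bayesian account of why a handful of examples can support sharp generalization.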
Supplementary Material: pdf
Submission Number: 8750