Keywords: induction, LLM, active learning
TL;DR: We model how to infer hidden natural language rules by designing experiments, using probabilistic reasoning with language models.
Abstract: We give a model of how to infer natural language rules by performing experiments. The model integrates Large Language Models (LLMs) with Monte Carlo algorithms for probabilistic inference, interleaving online belief updates with experiment design under information-theoretic criteria. We conduct a human-model comparison on a Zendo-style task, finding that a critical ingredient for modeling the human data is to assume that humans consider fuzzy, probabilistic rules in addition to performing approximately Bayesian belief updates. We also compare with recent algorithms that use LLMs to generate and revise hypotheses, finding that our online inference method recovers the true underlying rule more accurately and provides better support for designing optimal experiments.
Primary Area: Neuroscience and cognitive science (neural coding, brain-computer interfaces)
Flagged For Ethics Review: true
Submission Number: 11825