TL;DR: We can stochastically run an evaluator backwards to learn fast programs corresponding to data
Abstract: Bayesian program learning provides a general approach to human-level concept learning in artificial intelligence. However, most priors over powerful programming languages make the search for a high-scoring program intractable, and therefore cognitively unrealistic. We hypothesize that an efficient learner searches over programs that generate a likelihood efficiently by running to completion, and we model this hypothesis with an ad hoc proposal distribution over programs. Our proposal works backwards from observations to find programs that quickly generate similar results.
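The backwards-from-observations idea can be illustrated with a toy sketch. Everything here (the mini-language of `inc`/`dbl` operations and the `propose_backwards` helper) is a hypothetical example for intuition, not the paper's actual language or proposal distribution:

```python
import random

# Toy DSL: a program is a list of ops applied to 0, e.g. ["inc", "dbl"].
def evaluate(program):
    """Forward evaluator: run the program to completion."""
    x = 0
    for op in program:
        x = x + 1 if op == "inc" else x * 2  # "dbl"
    return x

def propose_backwards(y, rng=random):
    """Stochastic backward proposal (illustrative only): starting from an
    observation y, repeatedly invert one op at random until reaching the
    base value 0, yielding a program that regenerates y when run forward."""
    ops = []
    while y > 0:
        # Invert "dbl" only when y is even; otherwise invert "inc".
        if y % 2 == 0 and rng.random() < 0.5:
            y //= 2
            ops.append("dbl")
        else:
            y -= 1
            ops.append("inc")
    ops.reverse()  # backward inversions come out in reverse execution order
    return ops

prog = propose_backwards(6)
assert evaluate(prog) == 6  # every proposed program regenerates the observation
```

Because each backward step exactly inverts one forward operation, every sampled program evaluates to the observation; different random draws yield different (shorter or longer, i.e. faster or slower) programs, which is the sense in which the proposal favors programs that quickly generate similar results.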
Keywords: bayesian program learning, program induction, probabilistic program semantics