Keywords: Large Language Models, Abstraction, Procedural Knowledge
TL;DR: LLMs can learn to execute procedures that are described symbolically in their training data, but only with specific finetuning curricula.
Abstract: Large language models (LLMs) are typically trained to acquire behaviours from demonstrations or experience, yet much of their training data is declarative: instructions, rules, and descriptions that specify behaviours without showing how to execute them. We introduce **Programming by Backprop (PBB)**: a training regime that enables LLMs to acquire *procedural* knowledge (i.e., reusable behaviours) from *declarative* instructions encountered during training. With PBB, instructions in training data provide an opportunity to "program" specific behaviours into model weights. The core principle underpinning PBB is the separation of learning how instructions map to behaviour from internalising new instructions. We devise two distinct PBB curricula that leverage this principle. Through controlled experiments across two domains (algorithmic execution from Python source code and text generation from context-free grammars), we demonstrate the benefit of these curricula over training on a homogeneous data mixture. Crucially, PBB is highly sample efficient, with *a single instruction substituting for up to 100 execution examples*. Though execution of instructions in training data remains less reliable than when instructions are given in-context, our results demonstrate that procedural knowledge can be noisily "programmed" into LLMs through PBB, with important implications for data curation and safety.
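The two-phase separation the abstract describes can be sketched as a toy data curriculum. This is a minimal illustration, not the paper's actual pipeline: the document layout, function names, and the specific `f(x)` programs are all assumptions made for the example. Phase 1 pairs instructions with worked executions (teaching how instructions map to behaviour); phase 2 presents a new instruction declaratively, with no executions, which PBB aims to internalise into weights.

```python
# Hypothetical PBB-style curriculum construction (illustrative only;
# names and document formats are NOT from the paper).

def execution_doc(instruction: str, fn, x):
    """A demonstration: instruction, input, and the correct output."""
    return {"instruction": instruction, "input": x, "output": fn(x)}

def declarative_doc(instruction: str):
    """An instruction alone -- the behaviour is specified, never shown."""
    return {"instruction": instruction}

# Phase 1: instructions *with* executions, so the model learns the
# general instruction -> behaviour mapping.
double_src = "def f(x): return 2 * x"
phase1 = [execution_doc(double_src, lambda x: 2 * x, x) for x in range(3)]

# Phase 2: a new instruction seen only declaratively during training.
square_src = "def f(x): return x * x"
phase2 = [declarative_doc(square_src)]

# At evaluation time the model would be queried on the phase-2
# behaviour (e.g. f(4)) despite never having seen it executed.
curriculum = phase1 + phase2
```

The point of the sketch is the asymmetry: the phase-2 document carries no `output` field, so any correct execution at test time must come from the model having "programmed" the described behaviour into its weights rather than from memorised demonstrations.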
Primary Area: foundation or frontier models, including LLMs
Submission Number: 24916