Keywords: Program induction, compositional reasoning, lattice theory, dual-process reasoning, abstraction discovery
TL;DR: This paper introduces the Program Lattice Transformer (PLT), a framework that models compositional reasoning as a lattice problem, enabling both fast approximate inference and stepwise deliberate reasoning.
Abstract: Compositional reasoning involves solving new problems by systematically combining basic primitives into structured transformations, which requires three essential capabilities: learning compositional structure, inferring programs over that learned structure, and discovering refined primitives through abstraction of it. Existing approaches either lack explicit decomposition mechanisms for handling complex compositions or rely on hand-crafted primitives that limit adaptability. To address these limitations, we propose the Program Lattice Transformer (PLT), which learns compositional transformations within a structured latent program space. PLT preserves compositional structure by training an encoder under which program effects and their compositions correspond to integer linear combinations of program bases, forming a discrete program lattice that captures the geometric structure of compositional reasoning. Program induction then reduces to solving a Closest Vector Problem (CVP) in this lattice, enabling principled inference through two complementary modes: fast System-1 reasoning, which solves the CVP to infer a composed program in a single step, and deliberate System-2 reasoning, which performs stepwise lattice walks with intermediate verification. The framework naturally supports abstraction discovery through lattice reduction, which refines the primitive bases to improve efficiency and uncover more fundamental components. This work connects neural and symbolic reasoning by providing a mathematically principled framework for learning and inference in compositional domains.
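To make the CVP-based inference idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): primitive program effects are taken as the columns of a lattice basis, and a System-1-style inference step approximates the closest lattice vector by Babai rounding. The basis values, dimensions, and function names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical example (values are assumptions, not from the paper):
# columns of B are learned primitive program effects forming a lattice basis.
B = np.array([[2.0, 0.0],
              [1.0, 3.0]])

def infer_program(target, basis):
    """System-1 style inference: approximate CVP by Babai rounding.

    Solves for real-valued coefficients of the target in the basis,
    then rounds them to the nearest integers, yielding an integer
    combination of primitives (a nearby lattice point).
    """
    coeffs = np.linalg.solve(basis, target)  # real-valued coefficients
    return np.rint(coeffs).astype(int)       # snap to the integer lattice

# A target effect near the lattice point 1*b1 + 2*b2 = (2, 7)
target = np.array([2.1, 6.8])
coeffs = infer_program(target, B)
print(coeffs)       # inferred integer composition: [1 2]
print(B @ coeffs)   # reconstructed composed effect: [2. 7.]
```

A System-2 variant would instead take single lattice steps (adding or subtracting one basis column at a time) and verify each intermediate effect before continuing; Babai rounding is only a fast heuristic and can fail on ill-conditioned bases, which is where lattice reduction of the basis helps.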
Submission Number: 158