Compositional Generalization and Decomposition in Neural Program Synthesis

Published: 26 Mar 2022, Last Modified: 05 May 2023, DL4C 2022
Keywords: Program Synthesis, Generalization, Compositional Generalization, Decomposition
TL;DR: We study compositional generalization in the context of program synthesis, propose benchmark datasets to measure it, and introduce novel decomposition-based Transformer variants that achieve better compositional generalization.
Abstract: When writing programs, people have the ability to tackle a new complex task by decomposing it into smaller and more familiar subtasks. While it is difficult to measure whether neural program synthesis methods have similar capabilities, what we can measure is whether they compositionally generalize, that is, whether a model that has been trained on simpler subtasks is subsequently able to solve more complex tasks. In this paper, we focus on measuring the ability of learned program synthesizers to compositionally generalize. We first characterize several different axes along which we would like program synthesis methods to generalize, e.g., length generalization, or the ability to combine known subroutines in new ways that do not occur in the training data. Based on this characterization, we introduce a benchmark suite of tasks to assess these abilities, built on two popular existing datasets, SCAN and RobustFill. Finally, we take first steps toward improving the compositional generalization of Transformer models along these axes through novel attention mechanisms inspired by a human-like decomposition strategy. Empirically, we find that our modified Transformer models generally perform better than natural baselines, but the tasks remain challenging.