Learning compositional programs with arguments and sampling

Published: 23 Oct 2021, Last Modified: 08 Sept 2024 · AIPLANS · Readers: Everyone
Keywords: program synthesis, neural program synthesis, monte carlo tree search, deep learning, reinforcement learning
TL;DR: This work presents initial results on enriching search-based program synthesis to generate more human-like programs that use features of high-level programming languages (e.g., function arguments), starting from input/output examples.
Abstract: One of the most challenging goals in designing intelligent systems is empowering them with the ability to synthesize programs from data. Namely, given specific requirements in the form of input/output pairs, the goal is to train a machine learning model to discover a program that satisfies those requirements. A recent class of methods exploits combinatorial search procedures and deep learning to learn compositional programs. However, they usually generate only toy programs using a domain-specific language that lacks high-level features such as function arguments, which limits their applicability in real-world settings. We extend a state-of-the-art model, AlphaNPI, by learning to generate functions that can accept arguments. This improvement brings us closer to real computer programs. We showcase the potential of our approach by learning the Quicksort algorithm, showing how the ability to handle arguments is crucial for learning and generalization.
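To make the task concrete, here is a minimal sketch (in plain Python, not the paper's actual DSL or AlphaNPI implementation) of the setting the abstract describes: a specification given as input/output pairs, and a target compositional program whose subroutines take explicit arguments, as in the Quicksort case study. All names below are illustrative assumptions.

```python
from typing import List, Tuple

# Input/output examples act as the program specification.
io_examples: List[Tuple[List[int], List[int]]] = [
    ([3, 1, 2], [1, 2, 3]),
    ([5, 4, 4, 0], [0, 4, 4, 5]),
]

def partition(a: List[int], lo: int, hi: int) -> int:
    """Hypothetical low-level subroutine that takes arguments (lo, hi)."""
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a: List[int], lo: int, hi: int) -> None:
    """Target compositional program: recursive calls pass explicit arguments."""
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)

# A candidate program is accepted only if it satisfies every I/O example.
for inp, out in io_examples:
    a = list(inp)
    quicksort(a, 0, len(a) - 1)
    assert a == out
```

In search-based synthesis of this kind, the learner must discover both the call structure (which subroutine to invoke when) and, with the extension described here, the arguments each call receives; without argument passing, recursive programs such as Quicksort cannot be expressed compactly.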
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/learning-compositional-programs-with/code)