Representing Partial Programs with Blended Abstract Semantics

Published: 02 Nov 2020 (Last Modified: 05 May 2023), NeurIPS 2020 CAP Workshop
Keywords: program synthesis, representation learning, abstract interpretation, modular neural networks
TL;DR: We use a combination of concrete execution and learned neural semantics to represent partial programs, resulting in more accurate program synthesis.
Abstract: Synthesizing programs from examples requires searching over a vast, combinatorial space of possible programs. In this search process, a key challenge is representing the behavior of a partially written program before it can be executed, to judge whether it is on the right track and to predict where to search next. We introduce a general technique for representing partially written programs in a program synthesis engine. We take inspiration from abstract interpretation, in which an approximate execution model is used to determine whether an unfinished program can eventually satisfy a goal specification. Here we *learn* an approximate execution model, implemented as a modular neural network. By constructing compositional program representations that implicitly encode the interpretation semantics of the underlying programming language, we can represent partial programs with a flexible combination of concrete execution state and learned neural representations, falling back to the learned approximate semantics wherever concrete semantics are unavailable (i.e., in unfinished parts of the program). We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs, such as loops and higher-order functions, and, across several domains, to synthesize programs more accurately for a given search budget than purely neural approaches.
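To make the idea concrete, below is a minimal PyTorch sketch of blended evaluation over a toy arithmetic DSL: complete subtrees are executed with the concrete semantics, holes receive a learned embedding, and any operator with a non-concrete child is applied via a learned per-operator neural module. All names, the toy DSL, and the architectural choices here are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of blended abstract semantics over a toy integer DSL.
# All identifiers here are hypothetical, for illustration only.
import torch
import torch.nn as nn

HIDDEN = 64

class Hole:
    """An unwritten part of the partial program."""

class Node:
    def __init__(self, op, children):
        self.op = op              # operator name, e.g. "add", "mul"
        self.children = children  # ints, Nodes, or Holes

# Concrete semantics for finished subtrees.
CONCRETE_OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

class BlendedInterpreter(nn.Module):
    def __init__(self, ops):
        super().__init__()
        # One learned module per DSL operator (a modular neural network).
        self.modules_by_op = nn.ModuleDict({
            op: nn.Sequential(nn.Linear(2 * HIDDEN, HIDDEN), nn.ReLU(),
                              nn.Linear(HIDDEN, HIDDEN))
            for op in ops})
        self.hole_embedding = nn.Parameter(torch.randn(HIDDEN))
        self.value_encoder = nn.Linear(1, HIDDEN)  # lift concrete ints

    def eval(self, node):
        """Return a concrete int or a neural state vector for `node`."""
        if isinstance(node, Hole):
            return self.hole_embedding           # abstract: learned embedding
        if isinstance(node, int):
            return node                          # concrete leaf
        child_vals = [self.eval(c) for c in node.children]
        if all(isinstance(v, int) for v in child_vals):
            return CONCRETE_OPS[node.op](*child_vals)  # concrete execution
        # Blended case: encode concrete children into the neural state
        # space, then apply the learned approximate semantics of this op.
        states = [self.value_encoder(torch.tensor([float(v)]))
                  if isinstance(v, int) else v for v in child_vals]
        return self.modules_by_op[node.op](torch.cat(states))

interp = BlendedInterpreter(CONCRETE_OPS.keys())
partial = Node("add", [Node("mul", [2, 3]), Hole()])
rep = interp.eval(partial)  # (2*3) runs concretely; the hole stays neural
```

In an execution-guided synthesizer, the resulting representation (or an encoding of the concrete value, when evaluation completes) could then be compared against the goal specification to score candidate expansions of the hole.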