Code Synthesis with Priority Queue Training

Daniel A. Abolafia, Quoc V. Le, Mohammad Norouzi

Feb 15, 2018 · ICLR 2018 Conference Submission
  • Abstract: We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards. We introduce a novel iterative optimization scheme in which we train an RNN on a dataset of the K best programs generated so far, maintained in a priority queue. We then sample new programs from the RNN and add them to the priority queue. We benchmark our algorithm, called priority queue training (PQT), against genetic algorithm and reinforcement learning baselines on BF, a simple but expressive Turing-complete programming language. Our experimental results show that this deceptively simple PQT algorithm significantly outperforms the baselines. By adding a program length penalty to the reward function, we are able to synthesize short, human-readable programs.
  • TL;DR: We use a simple search algorithm involving an RNN and priority queue to find solutions to coding tasks.
  • Keywords: code synthesis, program synthesis, genetic algorithm, reinforcement learning, policy gradient, reinforce, priority queue, topk buffer, bf, code golf, rnn