Keywords: deep learning, natural language processing, program synthesis, large language models, reinforcement learning
TL;DR: Language models can use reinforcement learning to generate programming puzzles and solutions, which can be scored for correctness and used to fine-tune the model to improve its performance.
Abstract: Recent Language Models (LMs) achieve breakthrough performance in code generation when trained on human-authored problems, even solving some competitive-programming problems. Self-play has proven useful in games such as Go, and thus it is natural to ask whether LMs can generate their own instructive programming problems to improve their performance. We show that it is possible for an LM to synthesize programming problems and solutions, which are filtered for correctness by a Python interpreter. The LM’s performance is then seen to improve when it is fine-tuned on its own synthetic problems and verified solutions; thus the model “improves itself” using the Python interpreter. Problems are specified formally as programming puzzles [Schuster et al., 2021], a code-based problem format where solutions can easily be verified for correctness by execution. In experiments on publicly-available LMs, test accuracy more than doubles. This RL approach demonstrates the potential for code LMs, with an interpreter, to generate instructive problems and improve their own performance.
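The core filtering step described in the abstract, keeping only (puzzle, solution) pairs that the Python interpreter verifies, can be illustrated with the programming-puzzle format of Schuster et al. [2021]: a puzzle is a function `sat` that returns True exactly when its argument solves it. The sketch below is illustrative only, not the authors' code; the function names and example puzzle are hypothetical, and sandboxing/timeouts used in a real pipeline are omitted.

```python
def verify(puzzle_src: str, solution_src: str) -> bool:
    """Execute an LM-generated (puzzle, solution) pair and keep it only if
    sat(sol()) evaluates to True. A real pipeline would also sandbox the
    execution and enforce a timeout; that is omitted in this sketch."""
    env: dict = {}
    try:
        exec(puzzle_src, env)    # defines sat(...)
        exec(solution_src, env)  # defines sol()
        return env["sat"](env["sol"]()) is True
    except Exception:
        return False             # pairs that crash are discarded

# Hypothetical example of the puzzle/solution format:
puzzle = '''
def sat(s: str):
    return s.count("a") == 100 and s.count("b") == 50
'''
solution = '''
def sol():
    return "a" * 100 + "b" * 50
'''

if __name__ == "__main__":
    print(verify(puzzle, solution))  # True -> add pair to the fine-tuning set
```

Pairs that pass this check form the synthetic fine-tuning data; pairs that fail or raise an exception are dropped.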
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/language-models-can-teach-themselves-to/code)