Keywords: Transformers, Deep Learning, Program Synthesis
TL;DR: We tackle the problem of program synthesis using two variants of the transformer model
Abstract: Synthesizing programs from natural language descriptions is a challenging task. In this paper, we leverage transformer-based language models for program synthesis. We experiment with two transformer variants and show that they outperform existing state-of-the-art models. We also discuss qualitative differences in the representations learned by the two variants. Finally, we compare both models through the lens of "degree of memorization" and demonstrate that the vanilla transformer has a stronger tendency to memorize the training data than the other variant.