Keywords: Adversarial Robustness, Adversarial Attacks, Program Synthesis, Deep Learning
TL;DR: In this work, we tackle the issue of adversarial robustness in the context of program synthesis.
Abstract: Automatic program synthesis has seen a resurgence with the rise of deep learning. In this paper, we study the behaviour of program synthesis models under adversarial settings. Our experiments suggest that these models are prone to adversarial attacks. The proposed transformer model is more robust to adversarial attacks than the current state-of-the-art program synthesis model. We specifically experiment with generative models on the AlgoLisp DSL and showcase significant dataset bias through different classes of adversarial examples.