How Robust are Neural Code Completion Models to Source Code Transformation?

Anonymous

04 Mar 2022 (modified: 05 May 2023), ICLR 2022 Workshop DL4C Blind Submission
Keywords: code completion, refactoring, robustness, testing, evaluation, self-supervision
TL;DR: We study the effect of source code transformations on the robustness of neural code completion models.
Abstract: Neural language models hold great promise as tools for computer-aided programming, but questions remain over their reliability and the consequences of overreliance. In the domain of natural language, prior work has revealed that these models can be sensitive to naturally occurring variance and can malfunction in unpredictable ways. A more methodical examination is necessary to understand their behavior on programming-related tasks. In this work, we develop a methodology for systematically evaluating neural code completion models using common source code transformations. We measure the distributional shift induced by applying these transformations to a dataset of handwritten code fragments across four pretrained models, which exhibit varying degrees of robustness under transformation. Preliminary results from these experiments, together with observations from a qualitative analysis, suggest that while these models are promising, they should not be relied upon uncritically. Our analysis provides insights into the strengths and weaknesses of different models, and serves as a foundation for future work toward improving the accuracy and robustness of neural code completion.
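
To make the evaluation setup concrete, the sketch below illustrates one common source code transformation of the kind the abstract describes: a semantics-preserving variable renaming implemented with Python's ast module. This is an illustrative assumption, not the paper's actual pipeline, and the `complete` function named in the comments is a hypothetical stand-in for any pretrained code completion model; a robust model should behave equivalently on the original and transformed prompts.

import ast

class RenameIdentifiers(ast.NodeTransformer):
    """Apply a semantics-preserving variable renaming to a parsed module."""

    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        # Rename variable references and assignment targets.
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node):
        # Rename function parameters as well.
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

def rename(source: str, mapping: dict) -> str:
    tree = RenameIdentifiers(mapping).visit(ast.parse(source))
    return ast.unparse(tree)  # ast.unparse requires Python 3.9+

original = (
    "def mean(xs):\n"
    "    total = sum(xs)\n"
    "    return total / len(xs)\n"
)
transformed = rename(original, {"xs": "v0", "total": "v1"})

# `complete` is a hypothetical completion model; under this setup one
# would compare its continuations of the two prefixes, e.g.:
#   complete(original.rsplit("return", 1)[0])
#   complete(transformed.rsplit("return", 1)[0])
print(transformed)

Because the renaming preserves program semantics, any divergence between the two completions can be attributed to the model's sensitivity to surface form rather than to a change in the underlying task.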