Fix Bugs with Transformer through a Neural-Symbolic Edit Grammar

04 Mar 2022, 03:33 (modified: 20 Apr 2022, 04:46) · DL4C 2022
Keywords: Program Repair, Transformers, Machine Learning, Program Analysis
TL;DR: We introduce NSEdit (neural-symbolic edit), a novel Transformer-based code repair method that achieves state-of-the-art performance.
Abstract: We introduce NSEdit (neural-symbolic edit), a novel Transformer-based code repair method. Given only source code that contains bugs, NSEdit predicts an editing sequence that fixes them. The edit grammar is formulated as a regular language, and the Transformer uses it as a neural-symbolic scripting interface to generate editing programs. We modify the Transformer and add a pointer network to select edit locations. An ensemble of rerankers is trained to re-rank the editing sequences generated by beam search, and we fine-tune the rerankers on the validation set to reduce over-fitting. NSEdit is evaluated on various code repair datasets and achieves a new state-of-the-art accuracy ($24.04\%$) on the Tufano small dataset of the CodeXGLUE benchmark. NSEdit performs robustly when programs vary from package to package and when buggy programs are concrete. We conduct a detailed analysis of our method and demonstrate the effectiveness of each component.
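To make the idea of an editing sequence concrete, here is a minimal, hypothetical sketch of applying a predicted token-level edit program to buggy code. The operation names, edit format, and grammar here are illustrative assumptions, not the paper's actual edit grammar; the sketch only shows how a repair can be expressed as a small symbolic program over the buggy token stream rather than as regenerated source code.

```python
def apply_edits(tokens, edits):
    """Apply a sequence of (op, location, new_token) edits to a token list.

    Hypothetical ops (not the paper's actual grammar):
      'replace' swaps the token at location for new_token,
      'insert'  adds new_token before location,
      'delete'  removes the token at location (new_token ignored).
    Edits are applied right-to-left so earlier locations stay valid.
    """
    result = list(tokens)
    for op, loc, new in sorted(edits, key=lambda e: e[1], reverse=True):
        if op == "replace":
            result[loc] = new
        elif op == "insert":
            result.insert(loc, new)
        elif op == "delete":
            del result[loc]
    return result

# Example: the classic assignment-instead-of-comparison bug.
buggy = ["if", "(", "x", "=", "0", ")"]
fixed = apply_edits(buggy, [("replace", 3, "==")])
print(" ".join(fixed))  # if ( x == 0 )
```

In NSEdit, such an edit program is the Transformer's output, with a pointer network choosing the locations; candidate programs from beam search are then re-ranked before the best one is applied.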