PatchZero: Zero-Shot Automatic Patch Correctness Assessment

Published: 01 Jan 2023 · Last Modified: 12 May 2025 · CoRR 2023 · CC BY-SA 4.0
Abstract: Automated Program Repair (APR) techniques have shown increasingly promising results in fixing real-world bugs. Despite their effectiveness, APR techniques still face an overfitting problem: a generated patch can be incorrect even though it passes all tests, and manually evaluating the correctness of such test-passing patches is time-consuming. To address this problem, many approaches have been proposed to automatically assess the correctness of patches generated by APR techniques. These approaches are mainly evaluated in a cross-validation setting. However, for patches generated by a new or unseen APR tool, the cross-validation setting implicitly requires users to manually label a significant portion of these patches before inferring the labels of the remaining ones. To mitigate this issue, in this study we propose PatchZero, a patch correctness assessment technique that adopts a large language model for code. Specifically, for patches generated by a new or unseen APR tool, PatchZero needs no labeled patches from that tool for training; instead, it directly queries the large language model for code to predict the correctness labels. In this way, PatchZero reduces the manual labeling effort required to build a model that automatically assesses the correctness of patches generated by new APR tools. PatchZero prioritizes labeled patches from existing APR tools that are semantically similar to those generated by the new APR tool, which enhances its accuracy on patches from new tools. Our experimental results showed that PatchZero achieves an accuracy of 84.4% and an F1-score of 86.5% on average, even though no labeled patches of the new or unseen APR tool are available. In addition, our proposed technique outperformed the prior state-of-the-art by a large margin.
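To make the workflow the abstract describes more concrete, here is a minimal sketch of the zero-shot idea: retrieve labeled patches from existing APR tools that are similar to the new tool's patch, place them in the prompt as in-context examples, and ask a code LLM for a correctness label. All names here (llm_complete, the prompt format) are hypothetical, and textual similarity via difflib stands in for the paper's semantic similarity; the paper's actual prompt design and model may differ.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import Callable, List

@dataclass
class LabeledPatch:
    diff: str      # patch text from an existing APR tool
    correct: bool  # human-assigned correctness label

def top_k_similar(query: str, pool: List[LabeledPatch], k: int = 3) -> List[LabeledPatch]:
    """Rank labeled patches by similarity to the unlabeled patch.
    difflib's textual ratio is a stand-in for semantic similarity."""
    return sorted(
        pool,
        key=lambda p: SequenceMatcher(None, query, p.diff).ratio(),
        reverse=True,
    )[:k]

def assess_patch(patch: str,
                 pool: List[LabeledPatch],
                 llm_complete: Callable[[str], str]) -> bool:
    """Query a code LLM without any training: a few similar labeled
    patches serve as in-context examples. `llm_complete` is any
    user-supplied text-completion call (hypothetical here)."""
    examples = top_k_similar(patch, pool)
    prompt = "Decide whether each patch is a correct fix.\n\n"
    for ex in examples:
        label = "correct" if ex.correct else "overfitting"
        prompt += f"Patch:\n{ex.diff}\nLabel: {label}\n\n"
    prompt += f"Patch:\n{patch}\nLabel:"
    return llm_complete(prompt).strip().lower().startswith("correct")
```

Because no model parameters are updated, the only per-tool cost is retrieving similar labeled examples, which is what removes the manual labeling burden for new APR tools.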