Robustness evaluation of code generation systems via concretizing instructions

Published: 01 Jan 2025, Last Modified: 30 Apr 2025 · Inf. Softw. Technol. 2025 · CC BY-SA 4.0
Abstract:

Context: Code generation systems have been extensively developed in recent years to generate source code from natural language instructions. Despite their advancements, these systems still face robustness issues: even slightly different instructions can result in significantly different code semantics. Robustness is critical for code generation systems, as it directly affects software development, software quality, and trust in the generated code. Although existing testing techniques for general text-to-text software can detect some robustness issues, they produce many false positives and are limited in effectiveness because they ignore the characteristics of such systems.

Objective: To better evaluate (and further enhance) the robustness of code generation systems, we conducted the first exploration that carefully considers the characteristics of these systems. Specifically, we propose a novel technique (called COCO) and perform an extensive study to evaluate the robustness of code generation systems with COCO.

Method: COCO exploits the usage scenario of code generation systems to make the original programming instruction more concrete by incorporating features known to be present in the original code. A robust system should preserve code semantics under the concretized instruction; COCO reports a robustness inconsistency when it does not. In the extensive study, we evaluated the robustness of eight advanced code generation systems (including the commercial tools Copilot and ChatGPT) with COCO on two widely-used datasets.

Results: Our results demonstrate the effectiveness of COCO. It produces no false positives, ensuring the accuracy of the robustness evaluation, and it outperforms the two baselines adopted from general text-to-text software testing, detecting 440.31% and 95.81% more inconsistencies, respectively. Furthermore, fine-tuning with the concretized instructions generated by COCO reduces robustness inconsistencies by 21.90% to 60.18%.

Conclusions: COCO is effective in detecting robustness inconsistencies in code generation systems and significantly outperforms the baselines. Additionally, fine-tuning code generation systems with the concretized instructions provided by COCO can largely enhance their robustness.
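The concretize-then-compare loop described in the abstract can be sketched in a few lines. This is a minimal illustrative mock-up, not the authors' implementation: the feature extractor, the "You should ..." phrasing, and the semantic-equivalence check are all simplifying assumptions standing in for COCO's actual components.

```python
# Hypothetical sketch of COCO-style robustness testing.
# All function names and heuristics here are illustrative assumptions.

def extract_features(code: str) -> list[str]:
    """Toy feature extractor: record constructs visible in the generated code."""
    features = []
    if "for " in code or "while " in code:
        features.append("use a loop")
    if "def " in code:
        features.append("define a function")
    if "return" in code:
        features.append("return a value")
    return features

def concretize(instruction: str, code: str) -> str:
    """Make the instruction more concrete by stating features the
    original code already exhibits (so semantics should not change)."""
    feats = extract_features(code)
    return instruction + " " + " ".join(f"You should {f}." for f in feats)

def detect_inconsistency(generate, instruction: str, semantically_equal) -> bool:
    """Flag a robustness inconsistency when the concretized instruction
    yields code whose semantics differ from the original output."""
    original_code = generate(instruction)
    new_code = generate(concretize(instruction, original_code))
    return not semantically_equal(original_code, new_code)
```

Because the concretized instruction only restates properties the original output already has, any semantic divergence is a true robustness failure rather than a false positive, which matches the zero-false-positive result reported above. In practice, semantic equivalence would be judged by executing both outputs against the dataset's test cases rather than by the simple comparison stubbed here.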