LLMUpdater: Automatic comment synchronization via edit-model-guided LLMs

Published: 2026 · Last Modified: 19 Jan 2026 · Empir. Softw. Eng. 2026 · CC BY-SA 4.0
Abstract: Automatic comment synchronization is especially essential in collaborative development environments, since inconsistent comments can cause code misinterpretation, leading to bugs and higher maintenance costs. Existing studies employ neural machine translation models or rule-based heuristics; however, these methods typically demand extensive training or the creation of numerous rules to cover diverse scenarios, making them impractical for real-world use. To address these problems, this paper proposes an automatic comment synchronization method based on edit-model-guided Large Language Models (LLMs). For the first time, we conduct an empirical study of the capabilities of LLMs on comment synchronization, and we find that: (1) LLMs alone have insufficient capability for comment synchronization; (2) providing old comments as guidance significantly improves effectiveness; and (3) better results can be achieved by prompting the LLMs with edit tags at specific positions indicating what modifications are needed. Motivated by these findings, we propose a new framework, LLMUpdater, which models comment synchronization as a fill-in-the-blank problem. LLMUpdater first uses an edit model to mark the positions to be edited in the old comment and assign corresponding tags, leaving blanks between them. It then guides LLMs to fill in the blanks for more accurate comment synchronization by introducing additional edit knowledge, i.e., the old comment with its edit tags, into the prompt. To verify the effectiveness of LLMUpdater, we implement an edit model as an example and validate it on two widely used public datasets for comment synchronization. Moreover, we propose a human-in-the-loop evaluator to assess the updated comments. Experimental results show that our approach significantly improves comment synchronization without fine-tuning the LLMs. Our code is available online (https://anonymous.4open.science/r/LLMUpdater-1F61).
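Since the abstract only sketches LLMUpdater's core mechanism, the following is a minimal illustrative sketch of the fill-in-the-blank prompting idea. It is not the authors' released implementation: the tag vocabulary, prompt layout, and all names (`tag_span`, `build_fill_in_prompt`) are hypothetical; only the notion of an edit-tagged old comment guiding the LLM comes from the text.

```python
# Minimal illustrative sketch (not the paper's released code): turning an
# edit-tagged old comment into a fill-in-the-blank prompt for an LLM.
# The tag vocabulary and prompt wording below are assumptions.

# Hypothetical tags an edit model could assign to spans of the old comment.
KEEP, REPLACE, INSERT = "<keep>", "<replace>", "<insert>"

def tag_span(tag: str, text: str = "") -> str:
    """Wrap a comment span (possibly an empty blank) in an edit tag."""
    close = tag.replace("<", "</")
    return f"{tag}{text}{close}"

def build_fill_in_prompt(old_code: str, new_code: str, tagged_comment: str) -> str:
    """Assemble a prompt asking the LLM to fill only the tagged blanks,
    rather than rewrite the whole comment (so no fine-tuning is needed)."""
    return (
        "The code below changed; update its comment accordingly.\n"
        f"Old code:\n{old_code}\n"
        f"New code:\n{new_code}\n"
        "Old comment with edit tags; keep <keep> spans verbatim and "
        "fill each <replace>/<insert> blank:\n"
        f"{tagged_comment}\n"
        "Updated comment:"
    )

# Example: the edit model decided only the lookup-key phrase must change.
tagged = " ".join([
    tag_span(KEEP, "Returns the user record"),
    tag_span(REPLACE),  # blank left for the LLM; was "by numeric id"
])
prompt = build_fill_in_prompt(
    old_code="def get_user(user_id): ...",
    new_code="def get_user(email): ...",
    tagged_comment=tagged,
)
print(prompt)  # send to any chat LLM; its reply fills in the blanks
```

Under these assumptions, the edit model constrains where the LLM may write, which is what lets the framework improve accuracy without fine-tuning.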