NextCoder: Robust Adaptation of Code LMs to Diverse Code Edits

Published: 06 Mar 2025 · Last Modified: 19 Apr 2025 · DL4C @ ICLR 2025 · CC BY 4.0
Track: long paper (up to 9 pages)
Keywords: Code-LMs, code-editing, code-generation, software engineering
TL;DR: This paper introduces a synthetic data generation pipeline and a robust model adaptation algorithm to train models for diverse code-editing tasks without losing their original code generation abilities.
Abstract: Software engineering activities frequently involve edits to existing code. However, contemporary code language models (LMs) lack the ability to handle diverse types of code-edit requirements. In this work, we attempt to overcome this shortcoming through (1) a novel synthetic data generation pipeline and (2) a robust model adaptation algorithm. Starting with seed code examples and diverse editing criteria, our pipeline generates high-quality samples comprising original and modified code, along with natural language instructions in different styles and verbosity. Today's code LMs come bundled with strong abilities, such as code generation and instruction following, which should not be lost due to fine-tuning. To ensure this, we propose a novel adaptation algorithm, SeleKT, that (a) leverages a dense gradient-based step to identify the weights that are most important for code editing, and (b) does a sparse projection onto the base model to avoid overfitting. Using our approach, we obtain a new model NextCoder (adapted from Qwen2.5-Coder-7B) that achieves strong results on four code-editing benchmarks, outperforming comparable size models and even several larger ones. We show the generality of our approach by improving DeepSeekCoder-6.7B and Qwen2.5-Coder-7B, compare against other fine-tuning approaches, and demonstrate robustness by showing retention of code generation abilities post adaptation.
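The abstract describes SeleKT only at a high level: a dense gradient-based step identifies the weights most important for code editing, followed by a sparse projection onto the base model. Below is a minimal PyTorch-style sketch of one possible projection step, for illustration only. The function name `selekt_step`, the `sparsity` ratio, and the use of delta magnitude as the selection criterion (rather than the paper's gradient-based score) are assumptions, not the authors' implementation.

```python
import torch

@torch.no_grad()
def selekt_step(model, base_state, sparsity=0.05):
    """Sketch of a SeleKT-style sparse projection onto the base model.

    For each parameter tensor, keep only the fraction `sparsity` of entries
    whose deviation from the base model is largest in magnitude; reset the
    rest to the base-model values so the adapted model stays close to the
    base and retains its original abilities.
    """
    for name, param in model.named_parameters():
        base = base_state[name].to(param.device)
        delta = param.data - base
        k = max(1, int(sparsity * delta.numel()))
        # threshold at the k-th largest |delta|; smaller deltas are projected back
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        mask = (delta.abs() >= threshold).to(delta.dtype)
        param.data.copy_(base + mask * delta)
```

In a training loop, such a projection would presumably be applied periodically after dense fine-tuning updates, with `base_state` held as a frozen copy of the original checkpoint's parameters.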
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 24