Keywords: Code Generation, Large Language Model, Preference Learning, Evaluation
TL;DR: We train models and build benchmarks to predict code preferences with respect to verifiable properties and human preference.
Abstract: Large Language Models (LLMs) have recently demonstrated remarkable coding capabilities. However, assessing code generation based on well-formed properties and aligning it with developer preferences remains challenging. In this paper, we explore two key questions under the new challenge of code preference learning: (i) How do we train models to predict meaningful preferences for code? and (ii) How do human and LLM preferences align with verifiable code properties and developer code tastes? To this end, we propose CodeFavor, a framework for training pairwise code preference models from synthetic evolution data, including code commits and code critiques. To evaluate code preferences, we introduce CodePrefBench, a benchmark comprising 1,364 rigorously curated code preference tasks covering three verifiable properties (correctness, efficiency, and security) as well as human preference. Our evaluation shows that CodeFavor holistically improves the accuracy of model-based code preferences by up to $28.8\%$. Meanwhile, CodeFavor models can match the performance of models with $6\sim 9\times$ more parameters while being $34\times$ more cost-effective. We also rigorously validate the design choices in CodeFavor via a comprehensive set of controlled experiments. Furthermore, we discover the prohibitive costs and limitations of human-based code preference: despite spending 23.4 person-minutes on each task, $15.1\sim 40.3\%$ of tasks remain unsolved. Compared to model-based preference, human preference tends to be more accurate under the objective of code correctness, while being suboptimal for non-functional objectives.
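
For intuition, here is a minimal sketch of the pairwise code preference setup the abstract describes: given two candidate code snippets and a criterion, a judge model returns which snippet it prefers. The prompt template, the "A"/"B" label scheme, and the `judge` callable are assumptions made for illustration only; they are not CodeFavor's actual interface or training procedure.

```python
# Illustrative sketch of pairwise code preference prediction.
# The prompt format, labels ("A"/"B"), and the `judge` callable are
# assumptions for illustration, not the paper's actual implementation.
from typing import Callable, Literal

Preference = Literal["A", "B"]

def predict_preference(
    judge: Callable[[str], str],   # e.g., an LLM completion function
    criterion: str,                # e.g., "correctness", "efficiency", "security"
    code_a: str,
    code_b: str,
) -> Preference:
    """Ask the judge which of two code snippets better satisfies the criterion."""
    prompt = (
        f"Criterion: {criterion}\n\n"
        f"Snippet A:\n{code_a}\n\n"
        f"Snippet B:\n{code_b}\n\n"
        "Which snippet better satisfies the criterion? Answer with 'A' or 'B'."
    )
    answer = judge(prompt).strip().upper()
    return "A" if answer.startswith("A") else "B"

# Toy usage with a stand-in judge that always answers "B".
if __name__ == "__main__":
    toy_judge = lambda prompt: "B"
    print(predict_preference(toy_judge, "efficiency",
                             "sum([i for i in range(10)])",
                             "sum(range(10))"))
```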
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9214