Beyond Accuracy: Realistic and Diagnostic Evaluation of Code Generation Models

Published: 22 Sept 2025, Last Modified: 25 Nov 2025, DL4C @ NeurIPS 2025 Poster, CC BY 4.0
Keywords: Code Generation, Large Language Models, Benchmarking, Ecological Validity, Contamination Resistance, LLM-Judge Evaluation, Cross-Language Evaluation
TL;DR: A synthetic, telemetry-driven benchmark for realistic code completion, spanning six languages and six task categories. It emphasizes ecological validity, contamination resistance, and multi-metric evaluation.
Abstract: DevBench is a telemetry-driven benchmark designed to evaluate Large Language Models (LLMs) on realistic code completion tasks. It comprises 1,800 evaluation instances across six programming languages and six task categories derived from real developer telemetry, such as API usage and code purpose understanding. Unlike prior benchmarks, it emphasizes ecological validity, resists training-data contamination, and enables detailed diagnostics. The evaluation combines functional correctness, similarity-based metrics, and LLM-judge assessments focused on usefulness and contextual relevance. Evaluating 11 state-of-the-art models reveals differences in syntactic precision, semantic reasoning, and practical utility. Our benchmark provides actionable insights to guide model selection and improvement, a level of detail that is often missing from other benchmarks but essential for both practical deployment and targeted model development.
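
The abstract does not specify how the three metric families are combined or reported. As an illustration only, here is a minimal Python sketch of one way a multi-metric evaluation like this could be aggregated; the record fields (`passed_tests`, `similarity`, `judge_usefulness`, `judge_relevance`) and the per-family averaging are assumptions for the example, not details from the paper.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical per-instance record; field names are illustrative, not from DevBench.
@dataclass
class InstanceResult:
    language: str            # e.g. "python", "java"
    category: str            # e.g. "api_usage", "code_purpose"
    passed_tests: bool       # functional correctness (unit tests pass)
    similarity: float        # similarity-based metric, rescaled to [0, 1]
    judge_usefulness: float  # LLM-judge usefulness rating, rescaled to [0, 1]
    judge_relevance: float   # LLM-judge contextual-relevance rating, rescaled to [0, 1]

def aggregate(results: list[InstanceResult]) -> dict[str, float]:
    """Aggregate per-instance scores into benchmark-level metrics.

    Each metric family is reported separately rather than collapsed into a
    single score, so diagnostic differences between models stay visible.
    """
    return {
        "pass_rate": mean(1.0 if r.passed_tests else 0.0 for r in results),
        "mean_similarity": mean(r.similarity for r in results),
        "mean_judge_usefulness": mean(r.judge_usefulness for r in results),
        "mean_judge_relevance": mean(r.judge_relevance for r in results),
    }

if __name__ == "__main__":
    # Made-up numbers for two instances, purely to show the shape of the output.
    demo = [
        InstanceResult("python", "api_usage", True, 0.82, 0.9, 0.8),
        InstanceResult("java", "code_purpose", False, 0.55, 0.6, 0.7),
    ]
    print(aggregate(demo))
```

Reporting each metric family separately (rather than a single blended score) matches the paper's stated emphasis on detailed diagnostics; per-language or per-category breakdowns would follow the same pattern by grouping on the `language` or `category` fields.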
Submission Number: 77