Keywords: Code Benchmark; Code LLMs; Cross Language Evaluation; Contamination; Overfitting
TL;DR: We introduce a contamination-aware benchmark that evaluates code LLMs across 12 programming languages.
Abstract: LiveCodeBench (LCB) has recently become a widely adopted benchmark for evaluating large language models (LLMs) on code-generation tasks. By curating competitive programming problems, continually adding fresh problems, and filtering them by release date, LCB provides contamination-aware evaluation and offers a holistic view of coding capability. However, LCB remains restricted to Python, leaving open the question of whether LLMs can generalize across the diverse programming languages required in real-world software engineering.
We introduce Multi-LCB, a benchmark for evaluating LLMs across twelve programming languages, including Python.
Multi-LCB transforms Python tasks from the LCB dataset into equivalent tasks in other languages while preserving LCB’s contamination controls and evaluation protocol.
Because it is fully compatible with the original LCB format, Multi-LCB will automatically track future LCB updates, enabling systematic assessment of cross-language code generation competence and requiring models to sustain performance well beyond Python.
We evaluate 20 instruction-tuned and reasoning LLMs on Multi-LCB, uncovering evidence of Python overfitting, language-specific contamination, and substantial disparities in multilingual performance. Our results establish Multi-LCB as a rigorous new benchmark for multi-programming-language code evaluation, directly addressing LCB's primary limitation and exposing critical gaps in current LLM capabilities. All prompts, source code, and experimental configurations are publicly available at https://anonymous.4open.science/r/Multi-LiveCodeBench-C627/
Primary Area: datasets and benchmarks
Submission Number: 232