ML²B: Multi-Lingual ML Benchmark For AutoML

ICLR 2026 Conference Submission20002 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Multilingual machine learning, Large language models, Cross-lingual representation learning, Code generation, Machine learning workflows, Benchmark dataset
Abstract: Large language models (LLMs) have recently demonstrated strong capabilities in generating machine learning (ML) code, enabling end-to-end pipeline construction from natural language instructions. However, existing benchmarks for ML code generation are mainly restricted to English, overlooking the global and multilingual nature of ML research and practice. To address this gap, we present ML²B, the first benchmark for evaluating multilingual ML code generation. ML²B consists of 30 Kaggle competitions in 13 natural languages, covering tabular, text, and image data types, with structured metadata and validated human-reviewed translations. For evaluation, we employ AIDE, an automated framework for end-to-end assessment of data science pipelines, and report observations on cross-lingual model performance.
Primary Area: datasets and benchmarks
Submission Number: 20002