Keywords: Adaptive testing, LLM evaluation
Abstract: Large language model evaluation requires thousands of benchmark items, making evaluations expensive and slow. Existing methods compute average accuracy across fixed item sets, treating all items equally despite their varying quality and informativeness. We present ATLAS, an adaptive testing framework that uses Item Response Theory (IRT) to estimate model ability through Fisher information-guided item selection. Our analysis of five major benchmarks reveals that 3-6\% of items exhibit negative discrimination, indicating annotation errors that corrupt static evaluation. ATLAS achieves 90\% item reduction while maintaining measurement precision: on HellaSwag (5,608 items), we match full-benchmark estimates using only 42 items with a mean absolute error (MAE) of 0.154. Our framework keeps item exposure rates below 10\% and test overlap at 16-27\%, compared to static benchmarks where every model sees all items (100\% exposure). Among 4,000+ tested models, IRT rankings differ from accuracy rankings: models with identical accuracy receive different IRT ability estimates, and 23-31\% of all models shift by more than 10 rank positions. Code and calibrated item banks are available at https://anonymous.4open.science/r/ATLAS-3210/README.md.
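To make the selection rule named in the abstract concrete, below is a minimal sketch of Fisher information-guided item selection under a two-parameter logistic (2PL) IRT model. The parameter names (`a` for discrimination, `b` for difficulty, `theta_hat` for the current ability estimate) and the exclusion of already-administered items are illustrative assumptions; ATLAS's actual calibration, ability update, and exposure-control mechanisms are described in the paper itself and may differ.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability that a model at ability theta answers each item correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Per-item Fisher information at ability theta: I(theta) = a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Pick the unadministered item with maximum information at the current estimate."""
    info = fisher_information(theta_hat, a, b)
    info[list(administered)] = -np.inf  # exclude items already shown to this model
    return int(np.argmax(info))

# Hypothetical usage: 1,000 calibrated items, current ability estimate of 0.3.
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=1000)   # discrimination parameters
b = rng.normal(0.0, 1.0, size=1000)    # difficulty parameters
next_item = select_next_item(0.3, a, b, administered={12, 47})
```

Because Fisher information for the 2PL model peaks when an item's difficulty is near the current ability estimate and grows with the square of its discrimination, this rule naturally concentrates testing on the most informative items, which is what allows the large item reductions reported above.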
Primary Area: datasets and benchmarks
Submission Number: 20143