Keywords: Efficient Benchmarking; Multimodal Large Language Models; Agent
Abstract: Evaluating multimodal large language models (MLLMs) is becoming increasingly expensive as benchmarks grow in scale and cross-modality complexity. Inspired by structuralism in cognitive psychology, we tackle this difficulty with **AutoJudger**, an adaptive evaluation framework for efficient benchmarking. Instead of passively scoring a model on a fixed test set, AutoJudger treats evaluation as an interview-like process: it maintains a hypothesized ability structure of the evaluated model and actively selects the most informative questions to refine the model's ability boundaries. Specifically, AutoJudger has three core components: ***ability decomposition*** to organize evaluation along meaningful capability dimensions, ***ability estimation*** to maintain an up-to-date quantitative profile of the model's competence, and ***adaptive question selection*** to choose the most informative next questions.
To operationalize this paradigm, we introduce **$A^2$-Judger**, a novel MLLM-based **A**gentic instantiation of **A**uto**Judger** equipped with semantic-aware retrieval and dynamic memory.
Experiments on four representative multimodal benchmarks show that $A^2$-Judger significantly improves sample efficiency while maintaining reliable evaluation results.
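To make the interview-like loop concrete, here is a minimal sketch of one possible instantiation, assuming a Rasch (1PL) item-response model as a stand-in for the ability estimator; the `model.answer` interface, the `(question, difficulty)` pool format, and the gradient step size are hypothetical illustrations, not the paper's actual method.

```python
import numpy as np

def information(ability, difficulty):
    # Fisher information of a Rasch (1PL) item at the current ability
    # estimate: largest when item difficulty matches estimated ability.
    p = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
    return p * (1.0 - p)

def adaptive_eval(model, pool, n_rounds=50):
    """Interview-like loop: estimate ability, pick the most informative
    remaining question, observe the answer, update the estimate.
    `pool` is a list of hypothetical (question, difficulty) pairs."""
    ability = 0.0            # running scalar ability estimate
    answered = []            # memory of (difficulty, correct) pairs
    remaining = list(pool)
    for _ in range(n_rounds):
        if not remaining:
            break
        # Adaptive question selection: maximize expected information.
        q, d = max(remaining, key=lambda item: information(ability, item[1]))
        remaining.remove((q, d))
        correct = model.answer(q)  # hypothetical evaluated-model API
        answered.append((d, float(correct)))
        # Ability estimation: one gradient step on the Rasch log-likelihood,
        # where the gradient is sum_i (correct_i - p_i).
        grad = sum(c - 1.0 / (1.0 + np.exp(-(ability - dd)))
                   for dd, c in answered)
        ability += 0.1 * grad
    return ability
```

In this sketch a single scalar ability is tracked for brevity; AutoJudger's ***ability decomposition*** would instead maintain one such estimate per capability dimension, and $A^2$-Judger replaces the fixed difficulty pool with agentic, semantic-aware retrieval over the benchmark.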
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking; evaluation methodologies; statistical testing for evaluation
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 9196