Evaluating Cross-Modal Reasoning Ability and Problem Characteristics with Multimodal Item Response Theory
Keywords: VLM, Evaluation, IRT
Abstract: Multimodal Large Language Models (MLLMs) have recently emerged as general architectures capable of reasoning over diverse modalities.
Benchmarks for MLLMs should measure their capacity for cross-modal integration. However, current benchmarks are filled with shortcut questions that can be solved using only a single modality, yielding unreliable rankings.
For example, in vision-language settings, the correct answer can often be found without the image or without the text.
These low-quality questions unnecessarily inflate the size and computational cost of benchmarks.
We introduce a multimodal, multidimensional item response theory framework (M$^3$-IRT) that extends classical IRT by decomposing both model ability and item difficulty into image‑only, text‑only, and cross‑modal components. M$^3$-IRT estimates the cross‑modal ability of MLLMs and the cross‑modal difficulty of each question, enabling the construction of compact, high‑quality benchmark subsets that better reflect multimodal reasoning.
Across 24 VLMs on three benchmarks, M$^3$-IRT prioritizes genuinely cross‑modal questions over shortcuts and preserves ranking fidelity even when 50\% of items are artificially generated low‑quality questions, thereby reducing evaluation cost while improving reliability. M$^3$-IRT thus offers a practical tool for assessing cross‑modal reasoning and refining multimodal benchmarks.
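To make the decomposition concrete, the sketch below shows classical one-parameter IRT alongside one plausible multimodal extension in which ability $\theta$ and difficulty $b$ are split into image-only, text-only, and cross-modal components; this is an assumed illustration rather than the paper's exact parameterization, and the item loadings $w_{im}$ are hypothetical.
\[
P(y_{ij}=1) = \sigma(\theta_j - b_i), \qquad \sigma(x) = \frac{1}{1+e^{-x}} \quad \text{(classical 1PL IRT)}
\]
\[
P(y_{ij}=1) = \sigma\!\Big(\textstyle\sum_{m \in \{\text{img},\,\text{txt},\,\text{cross}\}} w_{im}\,(\theta_{jm} - b_{im})\Big) \quad \text{(illustrative multimodal decomposition)}
\]
where $y_{ij}$ indicates whether model $j$ answers item $i$ correctly.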
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 10457