Abstract: Multimodal Large Language Models (MLLMs) have garnered significant attention for their strong visual-semantic understanding. Most existing chart benchmarks evaluate MLLMs' ability to parse information from charts to answer questions. However, they overlook the inherent data bias of MLLMs, whereby models rely on their parametric memory to answer questions rather than genuinely understanding the chart content. To address this limitation, we introduce a novel Chart Hypothetical Question Answering (HQA) task, which imposes hypothetical assumptions on the same question, compelling models to engage in counterfactual reasoning grounded in the chart content. Furthermore, we propose HAI, a human-AI interactive data synthesis approach that combines the efficient text-editing capabilities of LLMs with human expert knowledge to generate diverse, high-quality HQA data at low cost. Using HAI, we construct Chart-HQA, a challenging benchmark synthesized from publicly available data sources. Evaluation results on 18 MLLMs of varying sizes reveal that current models face significant generalization challenges and exhibit imbalanced reasoning performance on HQA tasks. Our codebase and newly generated datasets are available at https://anonymous.4open.science/r/Chart-HQA-86BE
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: human-AI interaction, benchmarking, evaluation
Contribution Types: Reproduction study, Data resources, Data analysis
Languages Studied: English
Submission Number: 7854