VisFinEval: A Scenario-Driven Chinese Multimodal Benchmark for Holistic Financial Understanding

ACL ARR 2025 May Submission7127 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Multimodal large language models (MLLMs) hold great promise for automating complex financial analysis. To comprehensively evaluate their capabilities, we introduce VisFinEval, the first large‐scale Chinese benchmark that spans the full front‐middle‐back office lifecycle of financial tasks. VisFinEval comprises 15,848 annotated question–answer pairs drawn from eight common financial image modalities (e.g., K-line charts, financial statements, official seals), organized into three hierarchical scenario depths: Financial Knowledge \& Data Analysis, Financial Analysis \& Decision Support, and Financial Risk Control \& Asset Optimization. We evaluate 21 state-of-the-art MLLMs in a zero-shot setting. The top model, Qwen-VL-max, achieves an overall accuracy of 76.3\%, outperforming non-expert humans but trailing financial experts by over 14 percentage points. Our error analysis uncovers six recurring failure modes—including cross-modal misalignment, hallucinations, and lapses in business-process reasoning—that highlight critical avenues for future research. VisFinEval aims to accelerate the development of robust, domain-tailored MLLMs capable of seamlessly integrating textual and visual financial information. The data and code are available at \url{https://anonymous.4open.science/r/VisFinEval-626E}.
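For concreteness, the sketch below shows one way the zero-shot accuracy reported in the abstract could be scored over the benchmark's question–answer pairs, both overall and per scenario depth. The file name (visfineval.json), the field names (image, question, answer, scenario), and the predict() callable are illustrative assumptions, not the benchmark's actual schema or evaluation harness.

```python
import json
from collections import defaultdict

def evaluate(path: str, predict) -> dict:
    """Score zero-shot predictions as exact-match accuracy, overall and
    per scenario depth. Assumed schema: a JSON list of dicts with keys
    'image', 'question', 'answer', and 'scenario'."""
    with open(path, encoding="utf-8") as f:
        examples = json.load(f)

    correct, per_scenario = 0, defaultdict(lambda: [0, 0])
    for ex in examples:
        # predict() stands in for any MLLM call that takes an image path
        # and a question string and returns an answer string.
        pred = predict(ex["image"], ex["question"]).strip()
        hit = pred == ex["answer"].strip()
        correct += hit
        per_scenario[ex["scenario"]][0] += hit
        per_scenario[ex["scenario"]][1] += 1

    return {
        "overall": correct / len(examples),
        "by_scenario": {k: c / n for k, (c, n) in per_scenario.items()},
    }
```

A caller would pass the model's inference function as predict, e.g. evaluate("visfineval.json", my_mllm_answer); exact match is only a stand-in here, since the released benchmark may use choice extraction or other answer-matching rules.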
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: finance, multimodal large language model, multimodal QA, knowledge base QA, logical reasoning QA, open-domain QA
Contribution Types: Data resources, Data analysis
Languages Studied: English, Chinese
Submission Number: 7127