Abstract: Solving financial problems demands complex reasoning, multimodal data processing, and a broad technical understanding, presenting unique challenges for current large language models (LLMs).
We introduce **XFinBench**, a novel benchmark with 4,235 examples designed to evaluate LLMs' ability to solve comple**X**, knowledge-intensive **Fin**ancial problems across diverse graduate-level finance topics in multimodal contexts.
XFinBench assesses five core capabilities of LLMs: _terminology understanding_, _temporal reasoning_, _future forecasting_, _scenario planning_, and _numerical modelling_.
Using XFinBench, we conduct extensive experiments on 18 leading models. The results show that o1 is the best-performing text-only model, with an overall accuracy of 67.3\%, yet it still lags behind human experts by 12.5\%, especially in the _temporal reasoning_ and _scenario planning_ capabilities.
We further construct a knowledge bank with 3,032 finance terms for knowledge-augmentation analysis, and find that providing question-relevant knowledge yields consistent accuracy improvements only for small open-source models. Additionally, our error analysis reveals that rounding errors during calculation and blindness to the position and intersection of curves in images are the two primary issues behind models' poor performance on calculation and visual-context questions, respectively.
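To make the knowledge-augmentation setup concrete, below is a minimal sketch (not the authors' code) of the pipeline the abstract describes: retrieve finance terms relevant to a question from a knowledge bank and prepend their definitions to the prompt. The bank entries, overlap-based scoring rule, and prompt template are illustrative assumptions; the actual study may use a different retriever.

```python
from typing import List, Tuple

# Hypothetical knowledge bank: term -> definition (the real bank has 3,032 terms).
KNOWLEDGE_BANK = {
    "duration": "A bond's price sensitivity to interest-rate changes, in years.",
    "put option": "A contract giving the holder the right to sell an asset "
                  "at a fixed strike price before expiry.",
    "CAPM": "A model relating an asset's expected return to its beta "
            "against the market portfolio.",
}

def retrieve_terms(question: str, k: int = 2) -> List[Tuple[str, str]]:
    """Rank terms by word overlap between the question and each bank entry."""
    q_tokens = set(question.lower().split())
    scored = []
    for term, definition in KNOWLEDGE_BANK.items():
        entry_tokens = set((term + " " + definition).lower().split())
        scored.append((len(q_tokens & entry_tokens), term, definition))
    scored.sort(reverse=True)
    return [(t, d) for s, t, d in scored[:k] if s > 0]

def augment_prompt(question: str) -> str:
    """Prepend retrieved definitions to the question before querying a model."""
    context = "\n".join(f"- {t}: {d}" for t, d in retrieve_terms(question))
    return f"Relevant finance knowledge:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    print(augment_prompt("What is the duration of a 5-year zero-coupon bond?"))
```

In practice, a benchmark at this scale would more likely rank entries with dense embeddings rather than token overlap; the sketch only illustrates the retrieve-then-prepend structure of the analysis.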
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking; large multimodal models; financial reasoning; mathematical reasoning; foundation models and their evaluations
Contribution Types: Data resources
Languages Studied: English
Submission Number: 1909