A Structured Framework for Evaluating and Enhancing Interpretive Capabilities of Multimodal LLMs in Culturally Situated Tasks

ACL ARR 2025 May Submission 2019 Authors

18 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: This study evaluates the capabilities and characteristics of current mainstream Vision-Language Models (VLMs) in generating critiques of traditional Chinese painting. To this end, we first developed a quantitative framework for Chinese painting critique. The framework was constructed by extracting multi-dimensional evaluative features—including evaluative stance, core focal points, and argumentative quality—from human expert critiques using a zero-shot classification model. Based on these features, several representative critic personas were defined and quantified. The framework was then used to evaluate selected VLMs (e.g., Gemini 2.5 Pro). The experimental design employed persona-guided prompting to assess each VLM's ability to generate critiques from diverse perspectives. Our findings reveal the current performance levels, strengths, and areas for improvement of VLMs in the domain of art critique, offering insights into their potential and limitations in complex semantic understanding and content generation tasks.
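To make the described pipeline concrete, below is a minimal sketch of the kind of zero-shot feature extraction and persona-guided prompting the abstract outlines. The specific classifier checkpoint, the candidate label sets for each dimension, and the persona wording are illustrative assumptions; the paper's actual model choices and label taxonomy are not given here.

```python
from transformers import pipeline

# Zero-shot classifier for tagging critique sentences. The checkpoint name is
# an assumption for illustration; the submission does not specify the model.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Illustrative label sets for the three evaluative dimensions named in the
# abstract: evaluative stance, core focal points, argumentative quality.
DIMENSIONS = {
    "evaluative_stance": ["praise", "neutral description", "criticism"],
    "core_focal_point": ["brushwork", "composition", "ink and color", "artistic conception"],
    "argumentative_quality": ["assertion without support", "reasoned argument with evidence"],
}

def extract_features(critique_sentence: str) -> dict:
    """Return the top-scoring label per dimension for one expert-critique sentence."""
    features = {}
    for dimension, labels in DIMENSIONS.items():
        result = classifier(critique_sentence, candidate_labels=labels)
        features[dimension] = result["labels"][0]  # labels are sorted by score
    return features

def persona_prompt(persona_description: str) -> str:
    """Build a hypothetical persona-guided prompt for a VLM critique request."""
    return (
        f"You are {persona_description}. "
        "Examine the attached traditional Chinese painting and write a critique "
        "covering brushwork, composition, and artistic conception."
    )

if __name__ == "__main__":
    print(extract_features("The brushwork is vigorous, yet the composition feels crowded."))
    print(persona_prompt("a formalist critic who privileges brush technique over subject matter"))
```

In such a setup, the aggregated label distributions over a corpus of expert critiques would define the quantitative profile of each critic persona, and the same dimensions would then score the VLM-generated critiques for comparison.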
Paper Type: Long
Research Area: Special Theme (conference specific)
Research Area Keywords: Interpretability and Analysis of Models for NLP, Multimodality and Language Grounding to Vision, Robotics and Beyond, Resources and Evaluation, Computational Social Science and Cultural Analytics, Human-Centered NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis
Languages Studied: Chinese, English
Submission Number: 2019