Behavioral Bias of Vision-Language Models: A Behavioral Finance View

ACL ARR 2024 June Submission4112 Authors

16 Jun 2024 (modified: 03 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large Vision-Language Models (LVLMs) are evolving rapidly as Large Language Models (LLMs) are equipped with vision modules to create more human-like models. However, their applications across different domains should be evaluated carefully, as harmful biases may occur. Our work studies the potential behavioral biases of LVLMs from a behavioral finance perspective, an interdisciplinary field that jointly considers finance and psychology. We propose an end-to-end framework, from data collection to new evaluation metrics, to assess LVLMs' reasoning capabilities and the dynamic behaviors they manifest under two established human financial behavioral biases: recency bias and authority bias. Our evaluations find that recent open-source LVLMs such as LLaVA-NeXT, MobileVLM-V2, Mini-Gemini, MiniCPM-Llama3-V 2.5, and Phi-3-vision suffer significantly from these two biases, while the proprietary model GPT-4o is negligibly impacted. This highlights a direction in which open-source models can improve.
Paper Type: Short
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Behavioral Finance, Behavioral Bias
Contribution Types: Data resources, Data analysis
Languages Studied: English
Submission Number: 4112