Prospect Theory Fails for LLMs: Revealing Instability of Decision-Making under Epistemic Uncertainty
Keywords: Prospect Theory, LLM, Uncertainty
Abstract: Prospect Theory (PT) models human decision-making tendencies under uncertainty. While recent studies have developed questionnaires to elicit PT parameters (features characterizing decision-making tendencies) and describe the decision behavior of Large Language Models (LLMs), many of them did not report the performance (or explanatory power) of PT itself for LLMs. Additionally, although PT has been applied in many LLM-related fields, few studies have tested its robustness under linguistic uncertainty, especially epistemic markers (e.g., "maybe"). To address these research gaps, we design an experimental workflow. We adopt a classic economic questionnaire and perform parameter estimation with performance metrics (e.g., McFadden $R^2$). We further let LLMs make binary choices that reveal the internal probability values they assign to epistemic markers. We then incorporate epistemic markers into the questionnaire to examine the robustness of Prospect Theory parameters. Our findings suggest that modeling LLMs' decision-making with PT is not consistently reliable, and that applying Prospect Theory to LLMs is likely not robust under epistemic uncertainty.
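The PT components named in the abstract can be sketched as follows. This is a minimal illustration using the standard Tversky-Kahneman (1992) functional forms and parameter values, which the paper does not necessarily adopt; the function names and defaults here are assumptions for illustration only.

```python
# Hedged sketch of Prospect Theory components and the McFadden pseudo-R^2
# mentioned in the abstract. Functional forms and default parameters follow
# Tversky & Kahneman (1992); the paper's exact specification may differ.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """PT value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S-shaped probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def pt_utility(prospect):
    """PT utility of a prospect given as [(outcome, probability), ...]."""
    return sum(weight(p) * value(x) for x, p in prospect)

def mcfadden_r2(ll_model, ll_null):
    """McFadden pseudo-R^2: 1 - lnL(fitted model) / lnL(null model)."""
    return 1 - ll_model / ll_null
```

In a typical estimation workflow, the parameters (alpha, beta, lam, gamma) would be fit by maximum likelihood to the LLM's questionnaire choices, and `mcfadden_r2` would then quantify how much explanatory power PT provides over a null model.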
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: robustness, calibration/uncertainty
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 10323