Fairness Perceptions of Large Language Models

Published: 12 Jun 2025, Last Modified: 15 Aug 2025 · CFD 2025 Oral · CC BY 4.0
Keywords: Fair Division, LLMs
TL;DR: We evaluate how LLMs perceive and apply fairness in fair division tasks.
Abstract: Large language models (LLMs) are increasingly used for decision-making tasks where fairness is an essential desideratum. But what does fairness even mean to an LLM? To investigate this, we conduct a comprehensive evaluation of how LLMs perceive fairness in the context of resource allocation, using both synthetic and real-world data. We observe that various state-of-the-art LLMs, when asked to be fair, prioritize improving collective welfare over distributing benefits equally. Their perception of fairness is somewhat sensitive to how user preferences are provided, but less so to the real-world context of the decision-making task. Finally, we show that the best strategy for aligning an LLM's perception of fairness to a specific criterion is to provide it as a mathematical objective, without referencing "fairness", as this prevents the LLM from mixing the given criterion with its prior notions of fairness. Our results provide practical insights regarding when to use LLMs for fair decision-making and when using traditional algorithms may be more appropriate.
Submission Number: 12
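As a minimal illustration of the distinction the abstract draws between improving collective welfare and distributing benefits equally, here is a sketch of the two criteria expressed as mathematical objectives over a toy fair-division instance. All names and utility numbers here are hypothetical and not taken from the paper:

```python
from itertools import product

# Hypothetical utilities (illustrative, not from the paper):
# utilities[agent][item] = how much each agent values each item.
utilities = [
    [10, 10, 10],  # agent 0 values every item highly
    [1, 1, 1],     # agent 1 values every item only a little
]

def agent_value(assignment, agent):
    """Total utility an agent derives from the items assigned to them."""
    return sum(utilities[agent][item]
               for item, owner in enumerate(assignment) if owner == agent)

def best_allocation(objective):
    """Brute-force the assignment of items to agents maximizing the objective."""
    n_agents, n_items = len(utilities), len(utilities[0])
    return max(
        product(range(n_agents), repeat=n_items),
        key=lambda a: objective([agent_value(a, i) for i in range(n_agents)]),
    )

# Utilitarian objective (collective welfare): maximize the sum of utilities.
# Here it gives everything to agent 0: assignment (0, 0, 0).
print(best_allocation(sum))

# Egalitarian objective (equal benefit): maximize the worst-off agent's utility.
# Here it instead hands two items to agent 1: assignment (0, 1, 1).
print(best_allocation(min))
```

Phrasing a criterion this way, as a bare objective rather than as "fairness", mirrors the alignment strategy the abstract recommends.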