Abstract: Large Language Models (LLMs) have emerged as a new class of recommendation engines, surpassing traditional methods in both capability and scope, particularly in code generation.
In this paper, we reveal a novel \textit{provider bias} in LLMs: without explicit directives, these models show systematic preferences for services from specific providers in their recommendations (\eg, favoring Google Cloud over Microsoft Azure).
To systematically investigate this bias, we develop an automated pipeline to construct a dataset covering 6 distinct coding task categories and 30 real-world application scenarios.
Leveraging this dataset, we conduct the {\bf first} comprehensive empirical study of provider bias in LLM code generation across seven state-of-the-art LLMs, consuming approximately 500 million tokens (over \$5,000 in computational costs).
Our findings reveal that LLMs exhibit significant provider preferences, predominantly favoring services from Google and Amazon, and can autonomously modify input code to incorporate their preferred providers without being asked to do so.
Such bias has far-reaching implications for market dynamics and societal equilibrium, potentially contributing to digital monopolies. It may also mislead users and violate their expectations.
We call on the academic community to recognize this emerging issue and develop effective evaluation and mitigation methods to uphold AI security and fairness.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: model bias evaluation, model bias mitigation, code generation and understanding
Contribution Types: Publicly available software and/or pre-trained models, Data resources, Data analysis
Languages Studied: English
Submission Number: 5158