How Trustworthy is AI? A Deep Dive into the Bias in LLM-Based Recommendations

ACL ARR 2024 June Submission 3595 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large Language Model (LLM)-based recommendation systems provide more comprehensive recommendations than traditional systems by deeply analyzing content and user behavior. However, these systems often exhibit biases, favoring mainstream content while marginalizing non-traditional options because of skewed training data. This study investigates the intricate relationship between bias and LLM-based recommendation systems, focusing on music, song, and book recommendations across diverse demographic and cultural groups. Through a comprehensive analysis, this paper evaluates the impact of bias on recommendation outcomes and assesses mitigation strategies such as prompt engineering and hyperparameter optimization. Our findings indicate that neither prompt engineering nor hyperparameter optimization is particularly effective at mitigating these biases, highlighting the need for further research in this area.
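To make the evaluation setup concrete, the sketch below illustrates one way such a bias probe could be structured: vary a demographic persona in an otherwise fixed recommendation prompt and measure how much the returned item lists overlap. This is not the authors' code; the `query_llm` function, the personas, and the prompt template are all hypothetical placeholders, and a real study would substitute an actual model client and parsed outputs.

```python
# Illustrative sketch (not the paper's implementation): probing an LLM
# recommender for demographic bias by varying a persona in the prompt
# and comparing the overlap of the returned recommendation lists.

from itertools import combinations

def query_llm(prompt: str, temperature: float = 0.7) -> list[str]:
    # Hypothetical stand-in for a real LLM call (e.g., a chat-completion
    # API). A real implementation would send `prompt` to a model and
    # parse the returned list of items; here we return a canned response.
    return ["Item A", "Item B", "Item C"]

PERSONAS = [
    "a 20-year-old woman from Nigeria",
    "a 55-year-old man from Germany",
    "a teenager from rural India",
]

PROMPT_TEMPLATE = (
    "You are a recommendation assistant. Recommend 10 books for {persona}. "
    "Return only a comma-separated list of titles."
)

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two recommendation sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def probe_persona_bias() -> None:
    # Collect one recommendation set per persona.
    recs = {p: set(query_llm(PROMPT_TEMPLATE.format(persona=p))) for p in PERSONAS}
    # Very high overlap despite different personas can signal a
    # mainstream-content bias; prompt-engineering and hyperparameter
    # (e.g., temperature) variants can be compared on the same metric.
    for p1, p2 in combinations(PERSONAS, 2):
        print(f"{p1!r} vs {p2!r}: Jaccard = {jaccard(recs[p1], recs[p2]):.2f}")

if __name__ == "__main__":
    probe_persona_bias()
```

The same overlap metric could be recomputed after rewording the prompt or adjusting sampling hyperparameters, which is one plausible way to quantify the abstract's claim that these interventions leave the bias largely unchanged.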
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Ethics, Bias, and Fairness, Computational Social Science and Cultural Analytics, Dialogue and Interactive Systems, Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 3595