FACTER: Fairness-Aware Conformal Thresholding and Prompt Engineering for Enabling Fair LLM-Based Recommender Systems
Abstract: We propose FACTER, a fairness-aware framework for LLM-based recommendation systems that integrates conformal prediction with dynamic prompt engineering. By introducing an adaptive semantic variance threshold and a violation-triggered mechanism, FACTER automatically tightens fairness constraints whenever biased patterns emerge. We further develop an adversarial prompt generator that leverages historical violations to reduce repeated demographic biases without retraining the LLM. Empirical results on MovieLens and Amazon show that FACTER substantially reduces fairness violations (by up to 95.5%) while maintaining strong recommendation accuracy, revealing semantic variance as a potent proxy for bias.
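The abstract's core loop (calibrate a conformal threshold on semantic-variance scores, then tighten the constraint whenever a violation is observed) can be sketched as follows. This is a minimal illustration, not FACTER's actual implementation: the function names, the multiplicative `tighten` factor, and the use of a fixed calibration set are all assumptions for the sketch.

```python
import numpy as np

def calibrate_threshold(scores, alpha=0.1):
    """Conformal quantile: smallest score level covering at least
    (1 - alpha) of the calibration nonconformity scores
    (here, semantic variances), with finite-sample correction."""
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(scores, level, method="higher"))

class ViolationTriggeredMonitor:
    """Flags outputs whose semantic variance exceeds the conformal
    threshold, and tightens alpha after each violation (illustrative
    update rule, assumed for this sketch)."""
    def __init__(self, calib_scores, alpha=0.1, tighten=0.8):
        self.calib_scores = np.asarray(calib_scores, dtype=float)
        self.alpha = alpha
        self.tighten = tighten
        self.threshold = calibrate_threshold(self.calib_scores, alpha)

    def check(self, semantic_variance):
        violated = semantic_variance > self.threshold
        if violated:
            # Violation observed: shrink alpha, recompute the threshold,
            # so the fairness constraint becomes stricter over time.
            self.alpha *= self.tighten
            self.threshold = calibrate_threshold(self.calib_scores, self.alpha)
        return violated
```

In use, each LLM recommendation's semantic-variance score would be passed to `check`; a `True` result would additionally trigger the adversarial prompt update described in the abstract (not sketched here).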
Lay Summary: Modern AI systems, like large language models, are widely used in recommendations and decision-making but can sometimes produce biased or unfair results, such as favoring certain age groups or genders. Our work introduces FACTER, a method that detects and corrects such unfair behavior without retraining the model. By statistically monitoring model outputs and updating future prompts based on prior unfair responses, FACTER helps ensure more equitable outcomes while preserving model accuracy. It is efficient, easy to apply, and improves fairness across diverse real-world applications.
Link To Code: https://github.com/AryaFayyazi/FACTER
Primary Area: Social Aspects->Fairness
Keywords: Large Language Models, Conformal Prediction, Fairness, Repair, Embedding, Bias, Prompt Engineering, Adversarial
Submission Number: 13223