Foundation Models at Work: Fine-Tuning for Fairness in Algorithmic Hiring

Published: 02 Jan 2025, Last Modified: 03 Mar 2025 · AAAI 2025 Workshop AIGOV Poster · CC BY 4.0
Keywords: Fairness in Downstream Tasks, Algorithmic Hiring, Addressing Bias and Fairness
Abstract: Foundation models require fine-tuning to ensure their generative outputs align with intended results for specific tasks. Automating this fine-tuning process is challenging, as it typically requires human feedback that can be expensive to acquire. We present AutoRefine, a method that leverages reinforcement learning for targeted fine-tuning, using direct feedback from measurable performance improvements on specific downstream tasks. We demonstrate the method on a problem arising in algorithmic hiring platforms, where linguistic biases influence a recommendation system. In this setting, a generative model rewrites given job specifications so that a recommendation engine, which matches jobs to candidates, returns more diverse candidate matches. Our model detects and regulates biases in job descriptions to meet diversity and fairness criteria. Experiments on a public hiring dataset and a real-world hiring platform show how large language models can assist in identifying and mitigating biases in the real world.
Submission Number: 21
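
The abstract describes a reward loop in which a generative model rewrites a job specification, a recommendation engine matches the rewritten text to candidates, and the measured diversity of the returned candidate pool serves as the reinforcement-learning signal. The following is a minimal sketch of that loop under assumed interfaces; `policy_model.generate`, `recommender.match`, and the entropy-based diversity reward are illustrative stand-ins, not the authors' released AutoRefine code.

```python
import math
from collections import Counter


def diversity_reward(candidates, attribute="gender"):
    """Shannon entropy of a protected attribute across matched candidates.

    A more balanced candidate pool yields higher entropy and thus a larger
    reward. The attribute name is a hypothetical field of each candidate dict.
    """
    counts = Counter(c[attribute] for c in candidates)
    total = sum(counts.values())
    probs = [n / total for n in counts.values()]
    return -sum(p * math.log(p) for p in probs)


def rollout(job_spec, policy_model, recommender, baseline_candidates):
    """One RL step: rewrite the job spec, query the downstream recommender,
    and reward the policy by the gain in candidate-pool diversity."""
    rewritten = policy_model.generate(job_spec)      # generative rewrite
    candidates = recommender.match(rewritten)        # downstream matching task
    reward = (diversity_reward(candidates)
              - diversity_reward(baseline_candidates))
    return rewritten, reward                         # reward feeds a policy-gradient update, e.g. PPO
```

In this sketch the reward is the improvement in diversity over the candidates retrieved for the original job description, so the policy is only credited for rewrites that measurably change the downstream matches; the exact reward formulation used in the paper may differ.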