A Systematic Evaluation of Preference Aggregation in Federated RLHF for Pluralistic Alignment of LLMs
Keywords: LLM, fairness, RLHF, federated learning, pluralistic alignment, preference aggregation, adaptive optimization, PPO, group preferences
TL;DR: We evaluate aggregation strategies in federated RLHF and introduce adaptive alpha aggregation, which dynamically weights groups to improve fairness while preserving alignment quality in pluralistic LLM alignment.
Abstract: This paper addresses the challenge of aligning Large Language Models (LLMs) with diverse human preferences within Federated Learning (FL) environments, where standard methods often fail to adequately represent this diversity of viewpoints.
We introduce a comprehensive evaluation framework that systematically assesses the trade-off between alignment quality and fairness when using different aggregation strategies for human preferences.
Specifically, we evaluate standard aggregation techniques (Min, Max, and Average) and introduce a novel adaptive scheme that dynamically adjusts preference weights based on a group's historical alignment performance. Our experiments on question-answering (Q/A) tasks using a PPO-based RLHF pipeline demonstrate that our adaptive approach consistently achieves superior fairness while maintaining competitive alignment scores. This work offers a robust methodology for evaluating LLM behavior across diverse populations and provides a practical solution for developing truly pluralistic and fairly aligned models.
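To make the aggregation strategies concrete, the following is a minimal Python sketch of Min, Max, and Average reward aggregation plus one possible adaptive weighting rule. The function names (`aggregate_rewards`, `update_adaptive_weights`), the softmax-on-deficit update, and the `alpha` temperature are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def aggregate_rewards(group_rewards, strategy="average", weights=None):
    """Collapse per-group reward signals into one scalar for the PPO update.

    group_rewards: array of shape (num_groups,), one reward per preference group.
    strategy: "min", "max", "average", or "adaptive" (mirrors the baselines above;
              the adaptive path uses externally maintained weights).
    """
    r = np.asarray(group_rewards, dtype=float)
    if strategy == "min":        # worst-case (egalitarian) aggregation
        return float(r.min())
    if strategy == "max":        # best-case aggregation
        return float(r.max())
    if strategy == "average":    # uniform averaging across groups
        return float(r.mean())
    if strategy == "adaptive":   # weighted sum with adaptively chosen weights
        return float(np.dot(weights, r))
    raise ValueError(f"unknown strategy: {strategy}")


def update_adaptive_weights(history, alpha=1.0, eps=1e-8):
    """Hypothetical adaptive-alpha rule: up-weight groups whose running mean
    alignment score lags behind the best-performing group, then renormalize.

    history: array of shape (num_groups,) of running mean alignment scores.
    alpha:   temperature controlling how strongly lagging groups are up-weighted
             (the exact rule here is an assumption, not the paper's definition).
    """
    h = np.asarray(history, dtype=float)
    deficit = h.max() - h              # how far each group trails the leader
    w = np.exp(alpha * deficit)        # softmax-style emphasis on lagging groups
    return w / (w.sum() + eps)
```

Under this sketch, the weights are recomputed from each group's historical alignment scores after every evaluation round and then used in the "adaptive" branch of `aggregate_rewards` for the next round of PPO updates.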
Primary Area: foundation or frontier models, including LLMs
Submission Number: 19342