A Systematic Evaluation of Preference Aggregation in Federated RLHF for Pluralistic Alignment of LLMs
By: Mahmoud Srewa, Tianyu Zhao, Salma Elmalaki
This paper addresses the challenge of aligning large language models (LLMs) with diverse human preferences in federated learning (FL) environments, where standard aggregation methods often fail to adequately represent the full range of viewpoints. We introduce a comprehensive evaluation framework that systematically assesses the trade-off between alignment quality and fairness under different strategies for aggregating human preferences. In our federated setting, each group locally evaluates policy rollouts and produces reward signals, and the server aggregates these group-level rewards without accessing any raw preference data. Specifically, we evaluate standard reward aggregation techniques (min, max, and average) and introduce a novel adaptive scheme that dynamically adjusts each group's preference weight based on its historical alignment performance. Experiments on question-answering (Q/A) tasks with a PPO-based RLHF pipeline show that the adaptive approach consistently achieves superior fairness while maintaining competitive alignment scores. This work offers a robust methodology for evaluating LLM behavior across heterogeneous populations and a practical path toward truly pluralistic, fairly aligned models.
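To make the aggregation step concrete, here is a minimal sketch of how a server could combine per-group reward signals under the min, max, average, and adaptive strategies described above. It is illustrative only: the function names, the softmax-style weight update, and the exact adaptive rule are assumptions, not the paper's implementation.

```python
import numpy as np

def aggregate_rewards(group_rewards, strategy="average", weights=None):
    """Combine per-group reward signals for a batch of rollouts.

    group_rewards: shape (num_groups, batch_size); row g holds group g's
    reward for each rollout. Only these group-level rewards reach the
    server, never the groups' raw preference data.
    """
    r = np.asarray(group_rewards, dtype=float)
    if strategy == "min":       # most conservative: worst group reward per rollout
        return r.min(axis=0)
    if strategy == "max":       # most optimistic: best group reward per rollout
        return r.max(axis=0)
    if strategy == "average":   # uniform mean over groups
        return r.mean(axis=0)
    if strategy == "adaptive":  # weighted mean with server-maintained weights
        w = np.asarray(weights, dtype=float)
        return (w[:, None] * r).sum(axis=0) / w.sum()
    raise ValueError(f"unknown strategy: {strategy}")

def update_adaptive_weights(historical_alignment, temperature=1.0):
    """Illustrative adaptive rule (an assumption, not the paper's exact scheme):
    groups whose historical alignment with the shared policy is lower receive
    larger weights, so under-served groups gain influence in later rounds."""
    gap = 1.0 - np.asarray(historical_alignment, dtype=float)  # larger gap -> larger weight
    w = np.exp(gap / temperature)
    return w / w.sum()
```

The aggregated reward would then stand in for a single-annotator reward in the PPO update; the paper's actual weighting rule and its bookkeeping of historical alignment may differ from this sketch.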
Similar Papers
PluralLLM: Pluralistic Alignment in LLMs via Federated Learning
Machine Learning (CS)
Teaches AI to follow rules without seeing private data.
Maximizing the efficiency of human feedback in AI alignment: a comparative analysis
Human-Computer Interaction
Teaches AI to learn faster from people's choices.