PluralLLM: Pluralistic Alignment in LLMs via Federated Learning
By: Mahmoud Srewa, Tianyu Zhao, Salma Elmalaki
Potential Business Impact:
Teaches AI to follow rules without seeing private data.
Ensuring Large Language Models (LLMs) align with diverse human preferences while preserving privacy and fairness remains a challenge. Existing methods, such as Reinforcement Learning from Human Feedback (RLHF), rely on centralized data collection, making them computationally expensive and privacy-invasive. We introduce PluralLLM, a federated learning-based approach that enables multiple user groups to collaboratively train a transformer-based preference predictor, which can also serve as a reward model for aligning LLMs, without sharing sensitive data. Our method leverages Federated Averaging (FedAvg) to aggregate preference updates efficiently, achieving 46% faster convergence, a 4% improvement in alignment scores, and nearly the same group fairness measure as in centralized training. Evaluated on a Q/A preference alignment task, PluralLLM demonstrates that federated preference learning offers a scalable and privacy-preserving alternative for aligning LLMs with diverse human values.
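To make the aggregation step concrete, below is a minimal Python sketch of FedAvg applied to a preference predictor: each group fine-tunes a local copy on its own pairwise preferences and only model weights are shared and averaged. The class and function names (PreferencePredictor, local_update, fedavg), the toy pairwise loss, and the data format are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch: FedAvg over group-specific preference predictors.
# Assumes each client holds private (chosen, rejected) feature pairs.
import copy
import torch
import torch.nn as nn


class PreferencePredictor(nn.Module):
    """Toy stand-in for the transformer-based preference predictor."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scorer(x).squeeze(-1)


def local_update(global_model, data, epochs: int = 1, lr: float = 1e-3):
    """One group fine-tunes a copy of the global model on its private preferences."""
    local = copy.deepcopy(global_model)
    opt = torch.optim.Adam(local.parameters(), lr=lr)
    for _ in range(epochs):
        for chosen, rejected in data:
            # Bradley-Terry-style pairwise loss: chosen responses should score higher.
            loss = -nn.functional.logsigmoid(local(chosen) - local(rejected)).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    num_examples = sum(len(c) for c, _ in data)
    return local.state_dict(), num_examples


def fedavg(global_model, client_states, client_sizes):
    """FedAvg: average client weights, weighted by each group's example count."""
    total = sum(client_sizes)
    avg = {k: torch.zeros_like(v) for k, v in global_model.state_dict().items()}
    for state, n in zip(client_states, client_sizes):
        for k, v in state.items():
            avg[k] += v * (n / total)
    global_model.load_state_dict(avg)
    return global_model
```

The privacy benefit comes from the fact that only `state_dict()` parameters leave each group; the raw preference pairs stay local, and the averaged predictor can then be used as a reward model for downstream LLM alignment.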
Similar Papers
A Systematic Evaluation of Preference Aggregation in Federated RLHF for Pluralistic Alignment of LLMs
Computation and Language
Helps AI learn what many different people like.
A Survey on Personalized and Pluralistic Preference Alignment in Large Language Models
Computation and Language
Makes AI understand what you like best.
Steerable Pluralism: Pluralistic Alignment via Few-Shot Comparative Regression
Computation and Language
AI learns what *you* like, not just what's popular.