PluralLLM: Pluralistic Alignment in LLMs via Federated Learning

Published: March 13, 2025 | arXiv ID: 2503.09925v1

By: Mahmoud Srewa, Tianyu Zhao, Salma Elmalaki

Potential Business Impact:

Aligns AI with diverse user-group preferences without collecting their private data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Ensuring Large Language Models (LLMs) align with diverse human preferences while preserving privacy and fairness remains a challenge. Existing methods, such as Reinforcement Learning from Human Feedback (RLHF), rely on centralized data collection, making them computationally expensive and privacy-invasive. We introduce PluralLLM, a federated learning-based approach that enables multiple user groups to collaboratively train a transformer-based preference predictor without sharing sensitive data, which can also serve as a reward model for aligning LLMs. Our method leverages Federated Averaging (FedAvg) to aggregate preference updates efficiently, achieving 46% faster convergence, a 4% improvement in alignment scores, and nearly the same group fairness measure as centralized training. Evaluated on a Q/A preference alignment task, PluralLLM demonstrates that federated preference learning offers a scalable and privacy-preserving alternative for aligning LLMs with diverse human values.
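The abstract describes aggregating per-group preference-predictor updates with Federated Averaging (FedAvg). The sketch below illustrates only that aggregation step; the group sizes, weight shapes, and the `local_update` stub are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Federated Averaging (FedAvg) over preference-predictor
# weights. Group sizes, weight shapes, and `local_update` are hypothetical.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of per-group parameter lists."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

def local_update(global_weights, rng):
    """Stand-in for one group's local preference-learning step (assumed)."""
    return [w + 0.01 * rng.standard_normal(w.shape) for w in global_weights]

rng = np.random.default_rng(0)
global_weights = [rng.standard_normal((4, 4)), rng.standard_normal(4)]
group_sizes = [120, 80, 200]  # preference examples held by each user group

for _ in range(3):  # a few federated rounds
    updates = [local_update(global_weights, rng) for _ in group_sizes]
    global_weights = fedavg(updates, group_sizes)
```

In this setup, raw preference data never leaves a group; only model updates are sent to the server for averaging, which is the privacy-preserving property the paper emphasizes.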

Country of Origin
🇺🇸 United States

Page Count
7 pages

Category
Computer Science:
Machine Learning (CS)