Federated Large Language Models: Feasibility, Robustness, Security and Future Directions

Published: May 13, 2025 | arXiv ID: 2505.08830v1

By: Wenhao Jiang, Yuchuan Luo, Guilin Deng, and more

Potential Business Impact:

Lets AI learn from private data safely.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The integration of Large Language Models (LLMs) and Federated Learning (FL) presents a promising solution for joint training on distributed data while preserving privacy and addressing data silo issues. However, this emerging field, known as Federated Large Language Models (FLLM), faces significant challenges, including communication and computation overheads, heterogeneity, and privacy and security concerns. Current research has primarily focused on the feasibility of FLLM, but future trends are expected to emphasize enhancing system robustness and security. This paper provides a comprehensive review of the latest advancements in FLLM, examining challenges from four critical perspectives: feasibility, robustness, security, and future directions. We present an exhaustive survey of existing studies on FLLM feasibility, introduce methods to enhance robustness in the face of resource, data, and task heterogeneity, and analyze novel risks associated with this integration, including privacy threats and security challenges. We also review the latest developments in defense mechanisms and explore promising future research directions, such as few-shot learning, machine unlearning, and intellectual property (IP) protection. This survey highlights the pressing need for further research to enhance system robustness and security while addressing the unique challenges posed by the integration of FL and LLM.
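To make the "joint training on distributed data while preserving privacy" idea concrete, the sketch below shows federated averaging (FedAvg), the canonical aggregation step in federated learning: each client trains locally and only shares parameter updates, which the server averages weighted by local dataset size. This is a minimal illustration of the general technique, not an implementation from the paper; the function name, client weights, and toy parameter vectors are all hypothetical.

```python
# Minimal FedAvg sketch: the server averages client parameter vectors,
# weighted by each client's local dataset size. Raw data never leaves
# a client; only the parameters are communicated.

def fed_avg(client_updates, client_sizes):
    """Aggregate client parameter lists into one global model.

    client_updates: list of parameter lists (one per client)
    client_sizes:   list of local example counts (one per client)
    """
    total = sum(client_sizes)
    n_params = len(client_updates[0])
    aggregated = [0.0] * n_params
    for params, size in zip(client_updates, client_sizes):
        weight = size / total  # clients with more data contribute more
        for i, p in enumerate(params):
            aggregated[i] += weight * p
    return aggregated

# Two toy clients with 3-parameter "models"; client 2 holds 3x the data.
updates = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]
sizes = [100, 300]
print(fed_avg(updates, sizes))  # → [2.5, 3.5, 4.5]
```

In the FLLM setting the survey describes, the same aggregation principle applies, but the communicated parameters are often compressed or restricted to small adapter modules to cope with the communication and computation overheads of billion-parameter models.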

Country of Origin
🇨🇳 China

Page Count
35 pages

Category
Computer Science:
Cryptography and Security