Federated Learning Survey: A Multi-Level Taxonomy of Aggregation Techniques, Experimental Insights, and Future Frontiers

Published: November 27, 2025 | arXiv ID: 2511.22616v1

By: Meriem Arbaoui, Mohamed-el-Amine Brahmia, Abdellatif Rahmoun, and more

Potential Business Impact:

Lets computers learn together without sharing secrets.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The integration of IoT and AI has unlocked innovation across industries, but growing privacy concerns and data isolation hinder progress. Traditional centralized ML struggles to overcome these challenges, which has led to the rise of Federated Learning (FL), a decentralized paradigm that enables collaborative model training without sharing local raw data. FL preserves data privacy, reduces communication overhead, and supports scalability, yet the heterogeneity of participating devices and their data adds complexity compared to centralized approaches. This survey focuses on three main FL research directions: personalization, optimization, and robustness, offering a structured classification through a hybrid methodology that combines bibliometric analysis with systematic review to identify the most influential works. We examine challenges and techniques related to heterogeneity, efficiency, security, and privacy, and provide a comprehensive overview of aggregation strategies, including architectures, synchronization methods, and diverse federation objectives. To complement this, we discuss practical evaluation approaches and present experiments comparing aggregation methods under IID and non-IID data distributions. Finally, we outline promising research directions to advance FL, aiming to guide future innovation in this rapidly evolving field.
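To make the aggregation step concrete, the sketch below shows FedAvg-style weighted averaging, the baseline that most of the surveyed aggregation strategies build on: each client's parameters are weighted by its local sample count before averaging. The function name, the layer keys, and the toy client sizes are illustrative assumptions, not code from the survey.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted average of client model parameters (FedAvg-style baseline).

    client_updates: list of dicts mapping layer name -> np.ndarray
    client_sizes:   number of local training samples per client,
                    used as aggregation weights.
    """
    total = float(sum(client_sizes))
    weights = [n / total for n in client_sizes]
    aggregated = {}
    for layer in client_updates[0]:
        # Sum each layer across clients, scaled by the client's data share.
        aggregated[layer] = sum(
            w * update[layer] for w, update in zip(weights, client_updates)
        )
    return aggregated

# Toy example: two clients with unequal data sizes (a simple non-IID proxy).
clients = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.5])},
    {"w": np.array([3.0, 0.0]), "b": np.array([-0.5])},
]
global_model = fedavg(clients, client_sizes=[100, 300])
print(global_model)  # the larger client dominates the aggregate
```

Under non-IID distributions this weighting can bias the global model toward data-rich clients, which is one reason the survey's experiments compare alternative aggregation methods against this baseline.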

Country of Origin
🇫🇷 France

Page Count
65 pages

Category
Computer Science: Machine Learning (CS)