Federated Learning with Feedback Alignment

Published: December 14, 2025 | arXiv ID: 2512.12762v1

By: Incheol Baek, Hyungbin Kim, Minseo Kim, and more

Potential Business Impact:

Helps computers learn together without sharing private data.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Federated Learning (FL) enables collaborative training across multiple clients while preserving data privacy, yet it struggles with data heterogeneity, where clients' data are not independently and identically distributed (non-IID). This causes local drift, which hinders global model convergence. To address this, we introduce Federated Learning with Feedback Alignment (FLFA), a novel framework that integrates feedback alignment into FL. FLFA uses the global model's weights as a shared feedback matrix during the backward pass of local training, efficiently aligning local updates with the global model. This approach mitigates local drift with minimal additional computation and no extra communication overhead. Our theoretical analysis supports FLFA's design by showing how it alleviates local drift and by demonstrating robust convergence for both local and global models. Empirical evaluations, including accuracy comparisons and measurements of local drift, further show that FLFA can enhance other FL methods, demonstrating its effectiveness.
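
To make the core idea concrete, here is a minimal sketch of a local update in the spirit described above, assuming a one-hidden-layer MLP, squared-error loss, and plain SGD. The function name `local_update_flfa` and all dimensions are illustrative, not from the paper; the abstract only states that the global model's weights serve as a shared feedback matrix in the local backward pass, so the exact layer-wise details here are an assumption.

```python
# Sketch only: in standard backprop the hidden-layer error is propagated through the
# transpose of the *local* output weights; here, following the abstract's description,
# the *global* model's weights (frozen at the start of the round) act as the shared
# feedback matrix, keeping local gradient signals aligned with the global model.
import numpy as np

rng = np.random.default_rng(0)

def local_update_flfa(W1, W2, W2_global, x, y, lr=0.1):
    """One local SGD step where the backward pass uses the global output
    weights W2_global (not the drifting local W2) as the feedback matrix."""
    # Forward pass
    h = np.tanh(W1 @ x)             # hidden activation
    y_hat = W2 @ h                  # linear readout

    # Output error under squared-error loss
    e = y_hat - y

    # Backward pass: standard backprop would use W2.T here;
    # the feedback-alignment variant substitutes the global weights.
    delta_h = (W2_global.T @ e) * (1.0 - h ** 2)

    # Gradient steps for the local parameters
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return W1, W2

# Toy usage: one client running a few local steps against a fixed global model.
d_in, d_hid, d_out = 8, 16, 4
W1 = rng.normal(scale=0.1, size=(d_hid, d_in))
W2 = rng.normal(scale=0.1, size=(d_out, d_hid))
W2_global = W2.copy()               # global weights frozen for this round

for _ in range(10):
    x = rng.normal(size=d_in)
    y = rng.normal(size=d_out)
    W1, W2 = local_update_flfa(W1, W2, W2_global, x, y)
```

Because the feedback matrix is simply the global weights already held by every client, this swap adds no communication and only a negligible amount of extra computation per backward pass, which matches the efficiency claim in the abstract.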

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)