Heterogeneity-Oblivious Robust Federated Learning
By: Weiyao Zhang, Jinyang Li, Qi Song, and more
Potential Business Impact:
Protects smart learning from bad data.
Federated Learning (FL) remains highly vulnerable to poisoning attacks, especially under real-world hyper-heterogeneity, where clients differ significantly in data distributions, communication capabilities, and model architectures. Such heterogeneity not only undermines the effectiveness of aggregation strategies but also makes attacks harder to detect. Furthermore, high-dimensional models expand the attack surface. To address these challenges, we propose Horus, a heterogeneity-oblivious robust FL framework centered on low-rank adaptations (LoRAs). Rather than aggregating full model parameters, Horus inserts LoRAs into empirically stable layers and aggregates only the LoRAs, shrinking the attack surface. We further uncover a key empirical observation: the input projection (LoRA-A) is markedly more stable than the output projection (LoRA-B) under heterogeneity and poisoning. Leveraging this, we design a Heterogeneity-Oblivious Poisoning Score that uses features from LoRA-A to filter poisoned clients. For the remaining benign clients, we propose a projection-aware aggregation mechanism that preserves collaborative signals while suppressing drift, reweighting client updates by their consistency with the global update direction. Extensive experiments across diverse datasets, model architectures, and attacks demonstrate that Horus consistently outperforms state-of-the-art baselines in both robustness and accuracy.
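The filter-then-reweight pipeline described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the scoring here uses distance from the coordinate-wise median of flattened LoRA-A updates as a stand-in for the Heterogeneity-Oblivious Poisoning Score, and the aggregation reweights surviving updates by cosine consistency with their mean direction; the function names and the `keep_frac` parameter are hypothetical.

```python
import numpy as np

def poisoning_scores(lora_A_updates):
    """Assumed scoring sketch: distance of each client's flattened
    LoRA-A update from the coordinate-wise median across clients.
    (The paper's actual score may be computed differently.)"""
    flat = np.stack([u.ravel() for u in lora_A_updates])
    median = np.median(flat, axis=0)
    return np.linalg.norm(flat - median, axis=1)

def projection_aware_aggregate(updates, scores, keep_frac=0.8):
    """Drop the highest-scoring (most suspicious) clients, then
    reweight the rest by cosine consistency with the mean direction
    of the kept updates, standing in for the global direction."""
    order = np.argsort(scores)  # ascending: most benign first
    n_keep = max(1, int(keep_frac * len(updates)))
    kept = np.stack([updates[i].ravel() for i in order[:n_keep]])
    global_dir = kept.mean(axis=0)
    global_dir /= np.linalg.norm(global_dir) + 1e-12
    cos = kept @ global_dir / (np.linalg.norm(kept, axis=1) + 1e-12)
    w = np.clip(cos, 0.0, None)          # suppress drifting updates
    w /= w.sum() + 1e-12                 # normalize weights
    agg = (w[:, None] * kept).sum(axis=0)
    return agg.reshape(updates[0].shape)
```

A poisoned update that points away from the benign cluster gets a large score (so it is filtered) and, even if it survived filtering, a near-zero cosine weight.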
Similar Papers
ILoRA: Federated Learning with Low-Rank Adaptation for Heterogeneous Client Aggregation
Machine Learning (CS)
Fixes AI learning when data is different.
FedHL: Federated Learning for Heterogeneous Low-Rank Adaptation via Unbiased Aggregation
Machine Learning (CS)
Makes AI learn better from many sources.