SpectralKrum: A Spectral-Geometric Defense Against Byzantine Attacks in Federated Learning
By: Aditya Tripathi, Karan Sharma, Rahul Mishra, and more
Federated Learning (FL) distributes model training across clients who retain their data locally, but this architecture exposes a fundamental vulnerability: Byzantine clients can inject arbitrarily corrupted updates that degrade or subvert the global model. While robust aggregation methods (including Krum, Bulyan, and coordinate-wise defenses) offer theoretical guarantees under idealized assumptions, their effectiveness erodes substantially when client data distributions are heterogeneous (non-IID) and adversaries can observe or approximate the defense mechanism. This paper introduces SpectralKrum, a defense that fuses spectral subspace estimation with geometric neighbor-based selection. The core insight is that benign optimization trajectories, despite per-client heterogeneity, concentrate near a low-dimensional manifold that can be estimated from historical aggregates. SpectralKrum projects incoming updates into this learned subspace, applies Krum selection in compressed coordinates, and filters candidates whose orthogonal residual energy exceeds a data-driven threshold. The method requires no auxiliary data, operates entirely on model updates, and preserves FL privacy properties. We evaluate SpectralKrum against eight robust baselines across seven attack scenarios on CIFAR-10 with Dirichlet-distributed non-IID partitions (alpha = 0.1). Experiments spanning over 56,000 training rounds show that SpectralKrum is competitive against directional and subspace-aware attacks (adaptive-steer, buffer-drift), but offers limited advantage under label-flip and min-max attacks where malicious updates remain spectrally indistinguishable from benign ones.
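The pipeline described above (estimate a low-dimensional subspace from historical aggregates, project incoming updates, gate on orthogonal residual energy, and run Krum in the compressed coordinates) can be sketched as follows. This is a minimal illustrative reading of the abstract, not the authors' reference implementation; all function names, the PCA-via-SVD subspace estimator, and the quantile threshold `tau_quantile` are assumptions.

```python
# Illustrative sketch of the SpectralKrum idea from the abstract.
# Function names, the SVD-based subspace estimator, and the quantile
# threshold are assumptions, not the paper's reference implementation.
import numpy as np

def estimate_subspace(history, k):
    """Top-k principal directions of historical aggregate updates (rows)."""
    H = history - history.mean(axis=0)
    # Rows of Vt are principal directions of the centered history.
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt[:k].T  # shape (d, k), orthonormal columns

def krum_select(points, f):
    """Classic Krum: pick the point with the smallest sum of squared
    distances to its n - f - 2 nearest neighbors."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    scores = []
    for i in range(n):
        nearest = np.sort(np.delete(d2[i], i))[: n - f - 2]
        scores.append(nearest.sum())
    return int(np.argmin(scores))

def spectral_krum(updates, history, f, k=5, tau_quantile=0.8):
    """Project updates into the learned subspace, drop candidates whose
    orthogonal residual energy exceeds a data-driven (quantile) threshold,
    then run Krum in the compressed coordinates."""
    U = estimate_subspace(history, k)   # (d, k) orthonormal basis
    proj = updates @ U                  # compressed coordinates, (n, k)
    residual = updates - proj @ U.T     # component orthogonal to subspace
    energy = (residual ** 2).sum(axis=1)
    tau = np.quantile(energy, tau_quantile)
    keep = np.where(energy <= tau)[0]   # spectrally plausible candidates
    winner = keep[krum_select(proj[keep], f)]
    return updates[winner]
```

Because Krum runs on k-dimensional projections rather than the full d-dimensional updates, the pairwise-distance step is cheap even for large models, while the residual-energy gate is what targets the directional and subspace-aware attacks the abstract highlights.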