Robust Federated Learning for Malicious Clients using Loss Trend Deviation Detection
By: Deepthy K Bhaskar, Minimol B, Binu V P
Potential Business Impact:
Stops bad data from messing up shared learning.
Federated Learning (FL) facilitates collaborative model training among distributed clients while ensuring that raw data remains on local devices. Despite this advantage, FL systems are still exposed to risks from malicious or unreliable participants. Such clients can interfere with the training process by sending misleading updates, which can degrade the performance and reliability of the global model. Many existing defense mechanisms rely on gradient inspection, complex similarity computations, or cryptographic operations, which introduce additional overhead and may become unstable under non-IID data distributions. In this paper, we propose Federated Learning with Loss Trend Detection (FL-LTD), a lightweight and privacy-preserving defense framework that detects and mitigates malicious behavior by monitoring temporal loss dynamics rather than model gradients. The proposed approach identifies anomalous clients by detecting abnormal loss stagnation or abrupt loss fluctuations across communication rounds. To counter adaptive attackers, a short-term memory mechanism is incorporated to sustain mitigation for clients previously flagged as anomalous, while enabling trust recovery for stable participants. We evaluate FL-LTD on a non-IID federated MNIST setup under loss manipulation attacks. Experimental results demonstrate that the proposed method significantly enhances robustness, achieving a final test accuracy of 0.84, compared to 0.41 for standard FedAvg under attack. FL-LTD incurs negligible computational and communication overhead, maintains stable convergence, and avoids client exclusion or access to sensitive data, highlighting the effectiveness of loss-based monitoring for secure federated learning.
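The core idea described in the abstract, flagging clients whose reported losses stagnate or jump abruptly across rounds, then down-weighting them with a short suspicion memory, can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function names (`flag_anomalous_clients`, `aggregate_with_memory`), thresholds, and window sizes are all hypothetical assumptions chosen for the example.

```python
import numpy as np

def flag_anomalous_clients(loss_history, stagnation_tol=1e-3,
                           spike_tol=0.5, window=3):
    """Flag clients by loss-trend deviation.

    loss_history: dict client_id -> list of per-round losses.
    Thresholds are illustrative, not from the paper.
    """
    flagged = set()
    for cid, losses in loss_history.items():
        if len(losses) < window:
            continue
        deltas = np.diff(losses[-window:])
        # Abnormal stagnation: loss barely moves over the window.
        if np.all(np.abs(deltas) < stagnation_tol):
            flagged.add(cid)
        # Abrupt fluctuation: large jump between consecutive rounds.
        elif np.any(np.abs(deltas) > spike_tol):
            flagged.add(cid)
    return flagged

def aggregate_with_memory(updates, flagged, memory, memory_rounds=2):
    """Average updates, zero-weighting clients flagged now or recently.

    memory: dict client_id -> rounds of suspicion remaining. Clients are
    never permanently excluded; trust recovers once memory decays.
    """
    for cid in flagged:
        memory[cid] = memory_rounds            # refresh suspicion window
    weights = {cid: (0.0 if memory.get(cid, 0) > 0 else 1.0)
               for cid in updates}
    # Decay suspicion so stable clients regain trust in later rounds.
    for cid in list(memory):
        memory[cid] -= 1
        if memory[cid] <= 0:
            del memory[cid]
    total = sum(weights.values())
    if total == 0:
        return None                            # no trusted clients this round
    return sum(weights[cid] * updates[cid] for cid in updates) / total
```

Because the defense reads only scalar losses already exchanged during training, it adds no gradient inspection or cryptographic cost, which matches the negligible-overhead claim in the abstract.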
Similar Papers
Robust Federated Learning under Adversarial Attacks via Loss-Based Client Clustering
Machine Learning (CS)
Protects smart learning from bad data.
Toward Malicious Clients Detection in Federated Learning
Cryptography and Security
Finds bad guys in computer learning teams.