Pigeon-SL: Robust Split Learning Framework for Edge Intelligence under Malicious Clients
By: Sangjun Park, Tony Q. S. Quek, Hyowoon Seo
Potential Business Impact:
Keeps shared AI training safe from malicious clients.
Recent advances in split learning (SL) have established it as a promising framework for privacy-preserving, communication-efficient distributed learning at the network edge. However, SL's sequential update process is vulnerable to even a single malicious client, which can significantly degrade model accuracy. To address this, we introduce Pigeon-SL, a novel scheme grounded in the pigeonhole principle: if at most N of the M clients are adversarial, partitioning them into N+1 clusters guarantees that at least one cluster is entirely honest. In each global round, the access point partitions the clients into N+1 clusters, trains each cluster independently via vanilla SL, and evaluates their validation losses on a shared dataset. Only the cluster with the lowest loss advances, thereby isolating and discarding malicious updates. We further enhance training and communication efficiency with Pigeon-SL+, which repeats training on the selected cluster to match the update throughput of standard SL. We validate the robustness and effectiveness of our approach under three representative attack models: label flipping, activation manipulation, and gradient manipulation. The results demonstrate significant improvements in accuracy and resilience over baseline SL methods, supporting reliable edge intelligence in future intelligent wireless networks.
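
To make the selection mechanics concrete, here is a minimal Python sketch of one Pigeon-SL global round. It assumes two hypothetical helpers that stand in for details the abstract does not give: train_cluster_sl (runs vanilla SL over the clients of one cluster, starting from the current global model) and validation_loss (scores a candidate model on the shared validation set). This is an illustration of the idea, not the authors' implementation.

import random

def pigeon_sl_round(global_model, clients, n_adversaries, val_set,
                    train_cluster_sl, validation_loss):
    """One Pigeon-SL global round (sketch).

    With at most n_adversaries malicious clients, partitioning into
    n_adversaries + 1 clusters guarantees, by the pigeonhole principle,
    that at least one cluster contains only honest clients.
    """
    # Randomly partition clients into N + 1 disjoint clusters.
    shuffled = clients[:]
    random.shuffle(shuffled)
    n_clusters = n_adversaries + 1
    clusters = [shuffled[i::n_clusters] for i in range(n_clusters)]

    # Train each cluster independently via vanilla SL from the same
    # starting model, then score it on the shared validation set.
    candidates = []
    for cluster in clusters:
        model = train_cluster_sl(global_model, cluster)
        candidates.append((validation_loss(model, val_set), model))

    # Only the lowest-loss cluster's model advances; clusters poisoned
    # by malicious updates are discarded.
    best_loss, best_model = min(candidates, key=lambda c: c[0])
    return best_model, best_loss

Under the same assumptions, Pigeon-SL+ would simply invoke train_cluster_sl repeatedly on the winning cluster within the round, recovering the update throughput of standard SL while training only on updates vetted by the validation check.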
Similar Papers
P3SL: Personalized Privacy-Preserving Split Learning on Heterogeneous Edge Devices
Machine Learning (CS)
Lets phones learn without sharing private info.
Oops!... They Stole it Again: Attacks on Split Learning
Machine Learning (CS)
Keeps your private data safe during learning.
A Taxonomy of Attacks and Defenses in Split Learning
Cryptography and Security
Protects private data when computers share learning.