Score: 1

Pigeon-SL: Robust Split Learning Framework for Edge Intelligence under Malicious Clients

Published: August 4, 2025 | arXiv ID: 2508.02235v1

By: Sangjun Park, Tony Q. S. Quek, Hyowoon Seo

Potential Business Impact:

Keeps distributed AI training accurate even when some participating edge devices submit malicious data or updates.

Recent advances in split learning (SL) have established it as a promising framework for privacy-preserving, communication-efficient distributed learning at the network edge. However, SL's sequential update process is vulnerable to even a single malicious client, which can significantly degrade model accuracy. To address this, we introduce Pigeon-SL, a novel scheme grounded in the pigeonhole principle that guarantees at least one entirely honest cluster among M clients, even when up to N of them are adversarial. In each global round, the access point partitions the clients into N+1 clusters, trains each cluster independently via vanilla SL, and evaluates their validation losses on a shared dataset. Only the cluster with the lowest loss advances, thereby isolating and discarding malicious updates. We further enhance training and communication efficiency with Pigeon-SL+, which repeats training on the selected cluster to match the update throughput of standard SL. We validate the robustness and effectiveness of our approach under three representative attack models: label flipping, activation manipulation, and gradient manipulation, demonstrating significant improvements in accuracy and resilience over baseline SL methods in future intelligent wireless networks.
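A minimal sketch of one Pigeon-SL global round, assuming a toy scalar model and hypothetical helper names (partition_clients, train_cluster_sl, validation_loss); the paper's actual split-layer training and wireless details are omitted. It only illustrates the selection logic: partition M clients into N+1 clusters, train each cluster independently, and keep the cluster with the lowest shared-validation loss.

```python
import random
from typing import List, Sequence

def partition_clients(client_ids: Sequence[int], num_clusters: int) -> List[List[int]]:
    """Randomly split M clients into N+1 disjoint clusters. By the pigeonhole
    principle, with at most N malicious clients at least one cluster is all-honest."""
    ids = list(client_ids)
    random.shuffle(ids)
    return [ids[k::num_clusters] for k in range(num_clusters)]

def train_cluster_sl(global_model: dict, cluster: List[int], is_malicious) -> dict:
    """Placeholder for sequential vanilla-SL training within one cluster.
    Each client nudges a toy scalar weight; malicious clients push it the wrong
    way, loosely mimicking gradient manipulation."""
    model = dict(global_model)
    for cid in cluster:
        step = 0.1 if not is_malicious(cid) else -0.5
        model["w"] += step
    return model

def validation_loss(model: dict, target: float = 1.0) -> float:
    """Loss on a shared validation set; here just squared distance to a target."""
    return (model["w"] - target) ** 2

def pigeon_sl_round(global_model: dict, client_ids, num_malicious: int, is_malicious):
    """One global round: train N+1 candidate models, advance only the best one."""
    clusters = partition_clients(client_ids, num_malicious + 1)
    candidates = [train_cluster_sl(global_model, c, is_malicious) for c in clusters]
    losses = [validation_loss(m) for m in candidates]
    best = min(range(len(candidates)), key=lambda i: losses[i])
    return candidates[best], clusters[best]

if __name__ == "__main__":
    bad = {3, 7}                      # up to N = 2 adversarial clients (illustrative)
    model = {"w": 0.0}
    model, kept = pigeon_sl_round(model, range(10), len(bad), lambda c: c in bad)
    print("selected cluster:", kept, "updated w:", round(model["w"], 3))
```

Pigeon-SL+ would then repeat training on the selected (presumed honest) cluster within the same round to recover the update throughput of standard SL.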

Country of Origin
🇰🇷 Korea, Republic of; 🇸🇬 Singapore

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)