Communication-Computation Pipeline Parallel Split Learning over Wireless Edge Networks

Published: November 28, 2025 | arXiv ID: 2511.23167v1

By: Chenyu Liu, Zhaoyang Zhang, Zirui Chen, and more

Potential Business Impact:

Speeds up AI training at the wireless edge by overlapping devices' data transmission with server-side computation.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Split learning (SL) offloads the main computing tasks from multiple resource-constrained user equipments (UEs) to the base station (BS) while preserving local data privacy. However, its computation and communication processes remain sequential, which limits system efficiency. To overcome this limitation, this paper applies the pipeline parallelism (PP) of distributed training to SL in wireless networks, proposing communication-computation pipeline parallel split learning (C$^2$P$^2$SL). By treating the communication and computation processes of the UEs and the BS as a single pipeline, C$^2$P$^2$SL achieves pipeline parallelism across micro-batches split from each batch of data samples. Overlapping communication and computation in this way significantly reduces the total training time. Given that training efficiency depends on the position of the cut layer and the heterogeneity of the UEs, we formulate a joint optimization problem of task splitting and resource allocation, and design a solution based on alternating optimization. Experimental results demonstrate that C$^2$P$^2$SL reduces system training time by over 38% while maintaining convergence accuracy under different communication conditions.
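
As a rough illustration of why this overlap helps, the Python sketch below compares the makespan of strictly sequential split learning against a two-stage micro-batch pipeline, where the uplink transfer of one micro-batch's cut-layer activations overlaps with the BS's computation on the previous micro-batch. This is a minimal timing-model sketch, not the authors' implementation; all per-stage costs (`t_ue`, `t_up`, `t_bs`, `t_down`) are illustrative assumptions.

```python
def sequential_time(n_micro, t_ue, t_up, t_bs, t_down):
    """Sequential SL: each micro-batch runs UE compute -> uplink ->
    BS compute -> downlink strictly in series; nothing overlaps."""
    return n_micro * (t_ue + t_up + t_bs + t_down)

def pipelined_time(n_micro, t_ue, t_up, t_bs, t_down):
    """Two-stage pipeline view of C^2P^2SL-style overlap:
    stage 1 = UE compute + uplink, stage 2 = BS compute + downlink.
    With uniform micro-batches, the makespan is the pipeline-fill
    latency plus (n_micro - 1) intervals of the bottleneck stage."""
    stage1 = t_ue + t_up
    stage2 = t_bs + t_down
    return stage1 + stage2 + (n_micro - 1) * max(stage1, stage2)

if __name__ == "__main__":
    # Hypothetical per-micro-batch costs in seconds: UE forward pass,
    # uplink of cut-layer activations, BS forward/backward pass,
    # downlink of cut-layer gradients.
    n_micro, t_ue, t_up, t_bs, t_down = 8, 0.02, 0.05, 0.06, 0.05
    seq = sequential_time(n_micro, t_ue, t_up, t_bs, t_down)
    pipe = pipelined_time(n_micro, t_ue, t_up, t_bs, t_down)
    print(f"sequential: {seq:.3f}s  pipelined: {pipe:.3f}s  "
          f"saving: {100 * (1 - pipe / seq):.1f}%")
```

Under these assumed costs, pipelining saves time whenever both stages are nonzero, and the saving grows with the number of micro-batches as the fixed fill latency is amortized; this is the same mechanism behind the paper's reported reduction in training time, though the actual figure depends on the cut-layer position and resource allocation the paper jointly optimizes.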

Page Count
6 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing