SL-ACC: A Communication-Efficient Split Learning Framework with Adaptive Channel-wise Compression
By: Zehang Lin, Zheng Lin, Miao Yang, and more
Potential Business Impact:
Makes AI learn faster on many devices.
The increasing complexity of neural networks poses a significant barrier to deploying distributed machine learning (ML) paradigms, such as federated learning (FL), on resource-constrained devices. Split learning (SL) offers a promising solution by offloading the primary computing load from edge devices to a server via model partitioning. However, as the number of participating devices increases, the transmission of excessive smashed data (i.e., activations and gradients) becomes a major bottleneck for SL, slowing down model training. To tackle this challenge, we propose a communication-efficient SL framework, named SL-ACC, which comprises two key components: adaptive channel importance identification (ACII) and channel grouping compression (CGC). ACII first identifies the contribution of each channel in the smashed data to model training using Shannon entropy. Following this, CGC groups the channels based on their entropy and performs group-wise adaptive compression to shrink the transmission volume without compromising training accuracy. Extensive experiments across various datasets validate that our proposed SL-ACC framework takes considerably less time to achieve a target accuracy than state-of-the-art benchmarks.
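The abstract does not give implementation details for ACII or CGC, so the following is only a minimal Python sketch of the general idea: estimate per-channel Shannon entropy of the smashed activations with a histogram, group channels by entropy quantiles, and quantize low-entropy groups more aggressively. The function names, bin count, and bit-width choices here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def channel_entropy(activations, num_bins=32):
    """Estimate the Shannon entropy (in bits) of each channel's activation values.

    activations: array of shape (batch, channels, height, width).
    """
    _, c, _, _ = activations.shape
    entropies = np.empty(c)
    for ch in range(c):
        values = activations[:, ch].ravel()
        hist, _ = np.histogram(values, bins=num_bins)
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins to avoid log(0)
        entropies[ch] = -(p * np.log2(p)).sum()
    return entropies

def group_and_compress(activations, entropies, bit_widths=(2, 4, 8)):
    """Group channels by entropy and apply group-wise uniform quantization.

    Low-entropy channels (assumed less informative) get fewer bits;
    high-entropy channels keep more bits. Returns the dequantized
    activations and the per-channel bit allocation.
    """
    _, c, _, _ = activations.shape
    # Interior quantile edges split channels into len(bit_widths) groups.
    edges = np.quantile(entropies, np.linspace(0, 1, len(bit_widths) + 1))
    compressed = np.empty_like(activations)
    allocation = np.empty(c, dtype=int)
    for ch in range(c):
        group = np.searchsorted(edges[1:-1], entropies[ch], side="right")
        bits = bit_widths[group]
        allocation[ch] = bits
        x = activations[:, ch]
        lo, hi = x.min(), x.max()
        scale = (hi - lo) / (2 ** bits - 1)
        if scale == 0:
            scale = 1.0                   # constant channel: nothing to quantize
        q = np.round((x - lo) / scale)    # integer codes to be transmitted
        compressed[:, ch] = q * scale + lo
    return compressed, allocation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smashed = rng.standard_normal((8, 16, 14, 14)).astype(np.float32)
    ent = channel_entropy(smashed)
    _, bits = group_and_compress(smashed, ent)
    print("bits per channel:", bits)
```

In a real SL pipeline, only the integer codes, per-channel scales, and bit allocation would be sent to the server; the dequantization shown here would happen on the receiving side.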
Similar Papers
Communication Efficient Split Learning of ViTs with Attention-based Double Compression
Machine Learning (CS)
Makes AI learn faster with less data sent.
Communication-and-Computation Efficient Split Federated Learning: Gradient Aggregation and Resource Management
Distributed, Parallel, and Cluster Computing
Makes AI learn faster with less data sent.
Communication-Computation Pipeline Parallel Split Learning over Wireless Edge Networks
Distributed, Parallel, and Cluster Computing
Speeds up AI learning by sharing tasks smartly.