Score: 1

Communication Efficient Split Learning of ViTs with Attention-based Double Compression

Published: September 18, 2025 | arXiv ID: 2509.15058v1

By: Federico Alvetreti, Jary Pomponi, Paolo Di Lorenzo, and more

Potential Business Impact:

Lets split AI models train faster by sending far less data between devices.

Business Areas:
Image Recognition, Data and Analytics, Software

This paper proposes a novel communication-efficient Split Learning (SL) framework, named Attention-based Double Compression (ADC), which reduces the communication overhead required for transmitting intermediate Vision Transformer (ViT) activations during SL training. ADC combines two compression strategies. The first merges the activations of samples that are similar, based on the average attention score computed in the last client layer; this strategy is class-agnostic, so it can merge samples from different classes without losing generalization ability or degrading final accuracy. The second strategy follows the first and discards the least meaningful tokens, further reducing the communication cost. Combining the two strategies not only reduces the data sent during the forward pass but also naturally compresses the gradients, so the whole model can be trained without additional tuning or gradient approximations. Simulation results show that Attention-based Double Compression outperforms state-of-the-art SL frameworks, significantly reducing communication overhead while maintaining high accuracy.
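The abstract describes two sequential client-side compression steps. Below is a minimal, hypothetical PyTorch sketch of what such a step could look like; the function `adc_compress`, its arguments, the greedy pairwise sample merging, and the batch-averaged top-k token selection are illustrative assumptions, not the authors' implementation (the paper may, for instance, rank tokens per sample or merge more than two samples at once).

```python
import torch
import torch.nn.functional as F

def adc_compress(activations, attn, merge_ratio=0.5, keep_ratio=0.5):
    """Illustrative sketch of the two ADC-style compression steps
    (hypothetical helper, not the paper's code).

    activations: [B, N, D] client-side ViT token activations
    attn:        [B, H, N, N] attention maps from the last client layer
    """
    B, N, D = activations.shape

    # Average attention each token receives, pooled over heads and queries.
    token_scores = attn.mean(dim=1).mean(dim=1)               # [B, N]

    # --- Step 1: class-agnostic sample merging -----------------------------
    # Greedily pair the most similar samples (cosine similarity of their
    # flattened activations) and average each pair into a single activation.
    flat = activations.reshape(B, -1)
    sim = F.cosine_similarity(flat.unsqueeze(1), flat.unsqueeze(0), dim=-1)
    sim.fill_diagonal_(float("-inf"))
    n_pairs = int(B * merge_ratio) // 2
    merged, used = [], set()
    for _ in range(n_pairs):
        i, j = divmod(torch.argmax(sim).item(), B)
        merged.append(0.5 * (activations[i] + activations[j]))
        used.update((i, j))
        for k in (i, j):                     # exclude merged samples from further pairing
            sim[k, :] = float("-inf")
            sim[:, k] = float("-inf")
    kept = [activations[k] for k in range(B) if k not in used]
    acts = torch.stack(merged + kept)                          # [B', N, D], B' < B

    # --- Step 2: token pruning by attention score --------------------------
    # Keep only the tokens with the highest (batch-averaged) attention score;
    # the discarded tokens are simply never transmitted to the server.
    ranking = token_scores.mean(dim=0)                         # [N]
    keep = torch.topk(ranking, max(1, int(N * keep_ratio))).indices.sort().values
    return acts[:, keep, :]                                    # compressed payload
```

Because only the merged, pruned activations reach the server, the gradients that flow back are of the same reduced size, which is consistent with the abstract's claim that the backward pass is compressed "for free".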

Country of Origin
🇮🇹 Italy

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)