Communication Efficient Split Learning of ViTs with Attention-based Double Compression
By: Federico Alvetreti, Jary Pomponi, Paolo Di Lorenzo and more
Potential Business Impact:
Makes AI learn faster with less data sent.
This paper proposes a novel communication-efficient Split Learning (SL) framework, named Attention-based Double Compression (ADC), which reduces the communication overhead of transmitting intermediate Vision Transformer activations during SL training. ADC combines two parallel compression strategies. The first merges the activations of similar samples, based on the average attention score computed in the last client layer; this strategy is class-agnostic, meaning it can merge samples from different classes without hurting generalization or final accuracy. The second strategy, applied after the first, discards the least meaningful tokens, further reducing the communication cost. Combining these strategies not only reduces the amount of data sent during the forward pass, but also naturally compresses the gradients, so the whole model can be trained without additional tuning or gradient approximations. Simulation results show that Attention-based Double Compression outperforms state-of-the-art SL frameworks, significantly reducing communication overhead while maintaining high accuracy.
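To make the two compression steps concrete, below is a minimal, hypothetical PyTorch sketch of ADC-style client-side compression. The merging criterion shown here (cosine similarity between attention-weighted sample summaries), the compression ratios, and the function name adc_compress are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch of ADC-style double compression on client-side activations.
# Merging criterion and ratios are assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F


def adc_compress(activations, attn_scores, merge_ratio=0.5, keep_tokens=0.5):
    """Compress a batch of ViT activations before sending them to the server.

    activations: (B, N, D) token embeddings from the last client layer.
    attn_scores: (B, N) average attention each token receives in that layer.
    merge_ratio: fraction of samples removed by merging similar pairs.
    keep_tokens: fraction of tokens kept per (merged) sample.
    """
    B, N, D = activations.shape

    # --- Step 1: class-agnostic sample merging ---------------------------
    # Summarize each sample by its attention-weighted token average, then
    # greedily merge the most similar pairs (cosine similarity, assumed).
    weights = attn_scores / attn_scores.sum(dim=1, keepdim=True)          # (B, N)
    summaries = torch.einsum("bn,bnd->bd", weights, activations)          # (B, D)
    sim = F.cosine_similarity(summaries.unsqueeze(1), summaries.unsqueeze(0), dim=-1)
    sim.fill_diagonal_(-float("inf"))

    n_merges = int(B * merge_ratio) // 2
    merged_acts, merged_attn, used = [], [], set()
    for idx in torch.argsort(sim.flatten(), descending=True).tolist():
        if len(merged_acts) >= n_merges:
            break
        i, j = divmod(idx, B)
        if i == j or i in used or j in used:
            continue
        used.update((i, j))
        merged_acts.append(0.5 * (activations[i] + activations[j]))
        merged_attn.append(0.5 * (attn_scores[i] + attn_scores[j]))
    # Keep the remaining samples unmerged.
    for i in range(B):
        if i not in used:
            merged_acts.append(activations[i])
            merged_attn.append(attn_scores[i])
    acts = torch.stack(merged_acts)      # (B', N, D)
    attn = torch.stack(merged_attn)      # (B', N)

    # --- Step 2: token pruning --------------------------------------------
    # Keep only the tokens with the highest average attention score.
    k = max(1, int(N * keep_tokens))
    top_idx = attn.topk(k, dim=1).indices                                  # (B', k)
    acts = torch.gather(acts, 1, top_idx.unsqueeze(-1).expand(-1, -1, D))
    return acts                                                            # (B', k, D)


if __name__ == "__main__":
    acts = torch.randn(8, 197, 384)   # e.g., ViT-S token activations
    attn = torch.rand(8, 197)
    print(adc_compress(acts, attn).shape)  # smaller batch, fewer tokens
```

Because both the batch and the token dimension are reduced before transmission, the server's backward pass returns gradients only for the merged, pruned activations, which is why the gradient traffic is compressed without any extra approximation.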
Similar Papers
SL-ACC: A Communication-Efficient Split Learning Framework with Adaptive Channel-wise Compression
Machine Learning (CS)
Makes AI learn faster on many devices.
Attention and Compression is all you need for Controllably Efficient Language Models
Machine Learning (CS)
Lets computers remember more with less effort.
Adaptive Token Merging for Efficient Transformer Semantic Communication at the Edge
Machine Learning (CS)
Makes smart computer programs run faster, cheaper.