Model Parallelism With Subnetwork Data Parallelism

Published: July 11, 2025 | arXiv ID: 2507.09029v1

By: Vaibhav Singh, Zafir Khalid, Edouard Oyallon, and more

Potential Business Impact:

Trains large AI models while using less memory on each machine.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Distributed pre-training of large models at scale often imposes heavy memory demands on individual nodes and incurs significant intra-node communication costs. We propose a novel alternative approach that reduces the memory requirements by training small, structured subnetworks of the model on separate workers. Unlike pipelining, our method avoids inter-node activation communication and maintains bandwidth requirements that are comparable to or lower than standard data parallel communication schemes based on all-reduce. We evaluate two subnetwork construction strategies guided by the principle of ensuring uniform representation of each parameter across the distributed training setup. Our results show that the stochastic block dropping technique consistently outperforms the width-wise subnetwork construction previously explored in federated learning. We empirically attribute this superior performance to stronger gradient alignment in subnetworks that retain blocks with skip connections. Preliminary experiments highlight the promise of our approach, achieving a 20-40% reduction in memory usage without any loss in performance.
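The stochastic block-dropping idea from the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration rather than the authors' implementation: it assumes a stack of residual blocks in which each worker samples a subset of blocks to train, and dropped blocks reduce to the identity thanks to their skip connections, so they contribute no activations or gradients on that worker. Names such as `ResidualStack` and `sample_subnetwork` are illustrative only.

```python
# Minimal sketch (not the paper's code) of per-worker stochastic block dropping.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        # Skip connection: if this block is dropped, the layer is just the identity.
        return x + self.ff(x)

class ResidualStack(nn.Module):
    def __init__(self, dim, depth):
        super().__init__()
        self.blocks = nn.ModuleList([ResidualBlock(dim) for _ in range(depth)])

    def forward(self, x, active):
        for i, block in enumerate(self.blocks):
            if active[i]:      # inactive blocks are skipped entirely, so they
                x = block(x)   # store no activations and receive no gradients
        return x

def sample_subnetwork(depth, keep_ratio, generator=None):
    """Sample a boolean mask over blocks for one worker; drawing masks
    independently keeps each block trained by roughly the same fraction
    of workers (the uniform-representation principle from the abstract)."""
    keep = int(round(keep_ratio * depth))
    perm = torch.randperm(depth, generator=generator)
    mask = torch.zeros(depth, dtype=torch.bool)
    mask[perm[:keep]] = True
    return mask

# Single-worker toy step: only the sampled blocks get gradients.
model = ResidualStack(dim=64, depth=12)
mask = sample_subnetwork(depth=12, keep_ratio=0.7)
x = torch.randn(8, 64)
loss = model(x, mask).pow(2).mean()
loss.backward()
```

In a full distributed run, each worker would presumably draw its own mask per step or per round, skip allocating optimizer state for its dropped blocks, and average each block's gradients only across the workers that kept that block, which is how per-node memory can fall while communication stays comparable to or below a standard all-reduce.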

Country of Origin
🇨🇦 Canada

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)