ACE-Sync: An Adaptive Cloud-Edge Synchronization Framework for Communication-Efficient Large-Scale Distributed Model Training
By: Yi Yang, Ziyu Lin, Liesheng Wei
Large-scale deep learning models impose substantial communication overhead in distributed training, particularly in bandwidth-constrained or heterogeneous cloud-edge environments. Conventional synchronous or fixed-compression techniques often struggle to balance communication cost, convergence stability, and model accuracy. To address these challenges, we propose ACE-Sync, an Adaptive Cloud-Edge Synchronization Framework that integrates (1) an attention-based gradient importance predictor, (2) a differentiated parameter compression strategy, and (3) a hierarchical cloud-edge coordination mechanism. ACE-Sync dynamically selects which parameter groups to synchronize and determines appropriate compression levels under per-device bandwidth budgets. A knapsack-based optimization strategy is adopted to maximize important gradient preservation while reducing redundant communication. Furthermore, residual-based error compensation and device clustering ensure long-term convergence and cross-device personalization. Experiments show that ACE-Sync substantially reduces communication overhead while maintaining competitive accuracy. Compared with FullSync, ACE-Sync lowers communication cost from 112.5 GB to 44.7 GB (a 60% reduction) and shortens convergence from 41 to 39 epochs. Despite aggressive communication reduction, ACE-Sync preserves high model quality, achieving 82.1% Top-1 accuracy, only 0.3% below the full-synchronization baseline, demonstrating its efficiency and scalability for large-scale distributed training. These results indicate that ACE-Sync provides a scalable, communication-efficient, and accuracy-preserving solution for large-scale cloud-edge distributed model training.
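To make the selection step concrete, the sketch below illustrates one possible reading of the knapsack-based synchronization with residual error compensation described in the abstract: parameter groups are chosen under a per-device byte budget by a greedy importance-per-byte heuristic, and skipped gradients accumulate locally until a later round. The function names, the greedy approximation, and the toy usage values are assumptions for illustration; the paper's attention-based importance predictor and differentiated compression are not reproduced here.

```python
import numpy as np

def select_groups(importance, sizes, budget_bytes):
    """Greedy 0/1-knapsack approximation (assumed heuristic): pick parameter
    groups with the highest importance-per-byte until the budget is spent."""
    order = np.argsort(-importance / sizes)          # importance density, descending
    chosen, used = [], 0.0
    for g in order:
        if used + sizes[g] <= budget_bytes:
            chosen.append(int(g))
            used += sizes[g]
    return chosen

def sync_step(grads, residuals, importance, sizes, budget_bytes):
    """One synchronization round: groups that are not sent are folded into
    local residuals and re-added before the next selection (error compensation)."""
    corrected = [g + r for g, r in zip(grads, residuals)]
    chosen = set(select_groups(importance, sizes, budget_bytes))
    to_send = {}
    for i, g in enumerate(corrected):
        if i in chosen:
            to_send[i] = g                           # compression of sent groups omitted
            residuals[i] = np.zeros_like(g)
        else:
            residuals[i] = g                         # keep the skipped update locally
    return to_send, residuals

if __name__ == "__main__":
    # Hypothetical usage: 4 parameter groups, a budget that fits roughly two of them.
    grads = [np.random.randn(256) for _ in range(4)]
    residuals = [np.zeros(256) for _ in range(4)]
    importance = np.array([0.9, 0.1, 0.6, 0.3])
    sizes = np.array([g.nbytes for g in grads], dtype=float)
    sent, residuals = sync_step(grads, residuals, importance, sizes, budget_bytes=5000)
    print("groups sent this round:", sorted(sent))
```

Under these assumptions, the two groups with the highest importance per byte are transmitted, while the remaining updates persist in the residual buffers so no gradient information is permanently discarded.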