Score: 1

Communication-Efficient Serving for Video Diffusion Models with Latent Parallelism

Published: December 8, 2025 | arXiv ID: 2512.07350v1

By: Zhiyuan Wu, Shuai Wang, Li Chen, and more

Potential Business Impact:

Makes AI create videos faster and cheaper.

Business Areas:
Video Streaming, Content and Publishing, Media and Entertainment, Video

Video diffusion models (VDMs) perform attention computation over the 3D spatio-temporal domain. Compared to large language models (LLMs) processing 1D sequences, their memory consumption scales cubically, necessitating parallel serving across multiple GPUs. Traditional parallelism strategies partition the computational graph, requiring frequent high-dimensional activation transfers that create severe communication bottlenecks. To tackle this issue, we exploit the local spatio-temporal dependencies inherent in the diffusion denoising process and propose Latent Parallelism (LP), the first parallelism strategy tailored for VDM serving. LP decomposes the global denoising problem into parallelizable sub-problems by dynamically rotating the partitioning dimensions (temporal, height, and width) within the compact latent space across diffusion timesteps, substantially reducing communication overhead compared to prevailing parallelism strategies. To ensure generation quality, we design a patch-aligned overlapping partition strategy that matches partition boundaries with visual patches, and a position-aware latent reconstruction mechanism for smooth stitching. Experiments on three benchmarks demonstrate that LP reduces communication overhead by up to 97% over baseline methods while maintaining comparable generation quality. As a non-intrusive plug-in paradigm, LP can be seamlessly integrated with existing parallelism strategies, enabling efficient and scalable video generation services.
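To make the abstract's mechanism concrete, here is a minimal NumPy sketch of the rotating-dimension partitioning it describes: each denoising step splits the latent along one of the temporal/height/width axes into patch-aligned, overlapping chunks, and the results are stitched back by averaging the overlaps. All names (`rotate_dim`, `partition_latent`, `stitch_latent`) and the patch/overlap sizes are illustrative assumptions, not the paper's actual API; the real system would distribute chunks across GPUs rather than loop over them serially.

```python
# Illustrative sketch of Latent Parallelism as described in the abstract.
# All function names and constants are assumptions for exposition only.
import numpy as np

PATCH = 2    # assumed visual-patch size; partition boundaries align to it
OVERLAP = 2  # assumed overlap (in latent units) around each chunk

def rotate_dim(timestep: int) -> int:
    """Pick the partition axis for this denoising step.

    The abstract says LP rotates the partitioning dimension across
    timesteps: temporal (axis 0), height (axis 1), width (axis 2)
    of a latent laid out as (T, H, W, C) in this toy example.
    """
    return timestep % 3

def partition_latent(latent: np.ndarray, axis: int, num_workers: int):
    """Split `latent` along `axis` into overlapping, patch-aligned chunks."""
    size = latent.shape[axis]
    base = size // num_workers
    chunks, spans = [], []
    for w in range(num_workers):
        # Snap boundaries to patch multiples, then pad with overlap so each
        # worker sees local spatio-temporal context across its boundary.
        start = max(0, (w * base // PATCH) * PATCH - OVERLAP)
        if w < num_workers - 1:
            stop = min(size, ((w + 1) * base // PATCH) * PATCH + OVERLAP)
        else:
            stop = size
        idx = [slice(None)] * latent.ndim
        idx[axis] = slice(start, stop)
        chunks.append(latent[tuple(idx)])
        spans.append((start, stop))
    return chunks, spans

def stitch_latent(chunks, spans, axis: int, out_shape):
    """Position-aware reconstruction: place each chunk back at its span
    and average wherever the overlapping regions coincide."""
    out = np.zeros(out_shape)
    weight = np.zeros(out_shape)
    for chunk, (start, stop) in zip(chunks, spans):
        idx = [slice(None)] * out.ndim
        idx[axis] = slice(start, stop)
        out[tuple(idx)] += chunk
        weight[tuple(idx)] += 1.0
    return out / weight  # spans cover the full axis, so weight >= 1

# Toy denoising loop. In the real system each chunk's sub-problem would run
# on its own GPU; here the chunks are processed serially to show data flow.
latent = np.random.randn(8, 16, 16, 4)  # toy (T, H, W, C) latent
for t in range(4):
    axis = rotate_dim(t)
    chunks, spans = partition_latent(latent, axis, num_workers=2)
    chunks = [c * 0.99 for c in chunks]  # stand-in for one denoising step
    latent = stitch_latent(chunks, spans, axis, latent.shape)
```

In this reading, only the compact latent chunks (rather than the high-dimensional attention activations that graph-partitioning strategies exchange) would need to cross device boundaries between timesteps, which is consistent with the communication savings the abstract claims.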

Page Count
19 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing