Mosaic: Unlocking Long-Context Inference for Diffusion LLMs via Global Memory Planning and Dynamic Peak Taming
By: Liang Zheng, Bowen Shi, Yitao Hu, and more
Potential Business Impact:
Makes AI understand long stories with less computer memory.
Diffusion-based large language models (dLLMs) have emerged as a promising paradigm, utilizing simultaneous denoising to enable global planning and iterative refinement. While these capabilities are particularly advantageous for long-context generation, deploying such models faces a prohibitive memory capacity barrier stemming from severe system inefficiencies. We identify that existing inference systems are ill-suited for this paradigm: unlike autoregressive models constrained by the cumulative KV-cache, dLLMs are bottlenecked by transient activations recomputed at every step. Furthermore, general-purpose memory reuse mechanisms lack the global visibility to adapt to dLLMs' dynamic memory peaks, which alternate between the logits projection and the FFN layers. To address these mismatches, we propose Mosaic, a memory-efficient inference system that shifts from local, static management to a global, dynamic paradigm. Mosaic integrates a mask-only logits kernel to eliminate redundancy, a lazy chunking optimizer driven by an online heuristic search to adaptively mitigate dynamic peaks, and a global memory manager that resolves fragmentation via virtual addressing. Extensive evaluations demonstrate that Mosaic achieves an average 2.71$\times$ reduction in the memory peak-to-average ratio and increases the maximum inference sequence length supportable on identical hardware by 15.89-32.98$\times$. This scalability is achieved without compromising accuracy or speed; in fact, Mosaic reduces latency by 4.12%-23.26%.
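To make the mask-only logits idea concrete, here is a minimal PyTorch sketch (not Mosaic's actual kernel; the function names, tensor shapes, and the 25% mask ratio are illustrative assumptions). It contrasts projecting every position through the LM head, which materializes a full sequence-by-vocabulary logits tensor at each denoising step, with projecting only the still-masked positions, so the transient activation peak scales with the number of masked tokens rather than the full sequence length.

```python
# Hypothetical sketch of mask-only logits for a diffusion-LLM denoising step.
# Assumed names: hidden (decoder output), lm_head (vocab projection), mask
# (True where a token is still masked and must be resampled).
import torch

def full_logits(hidden: torch.Tensor, lm_head: torch.nn.Linear) -> torch.Tensor:
    # Naive path: builds a [batch, seq_len, vocab] tensor every step,
    # even though unmasked tokens are already fixed.
    return lm_head(hidden)

def mask_only_logits(hidden: torch.Tensor,
                     lm_head: torch.nn.Linear,
                     mask: torch.Tensor) -> torch.Tensor:
    # Gather hidden states of masked positions only, then project:
    # the transient logits buffer is [num_masked, vocab] instead of
    # [batch * seq_len, vocab].
    masked_hidden = hidden[mask]          # [num_masked, hidden_dim]
    return lm_head(masked_hidden)         # [num_masked, vocab]

if __name__ == "__main__":
    batch, seq_len, hidden_dim, vocab = 1, 4096, 1024, 32000
    hidden = torch.randn(batch, seq_len, hidden_dim)
    lm_head = torch.nn.Linear(hidden_dim, vocab, bias=False)
    mask = torch.rand(batch, seq_len) < 0.25   # ~25% of tokens still masked
    print(mask_only_logits(hidden, lm_head, mask).shape)  # ~[1024, 32000]
```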
Similar Papers
Sparse-dLLM: Accelerating Diffusion LLMs with Dynamic Cache Eviction
Computation and Language
Makes AI models remember more without using more memory.
Taming the Memory Footprint Crisis: System Design for Production Diffusion LLM Serving
Distributed, Parallel, and Cluster Computing
Makes serving diffusion language models faster and cheaper.
dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching
Machine Learning (CS)
Makes AI text generators work much faster.