Seed Diffusion: A Large-Scale Diffusion Language Model with High-Speed Inference

Published: August 4, 2025 | arXiv ID: 2508.02193v1

By: Yuxuan Song, Zheng Zhang, Cheng Luo, and more

BigTech Affiliations: ByteDance

Potential Business Impact:

Generates code at very high speed while maintaining competitive quality.

We present Seed Diffusion Preview, a large-scale language model based on discrete-state diffusion, offering remarkably fast inference. Thanks to non-sequential, parallel generation, discrete diffusion models provide a notable speedup that mitigates the inherent latency of token-by-token decoding, as demonstrated recently (e.g., Mercury Coder, Gemini Diffusion). Seed Diffusion Preview achieves an inference speed of 2,146 tokens/s on H20 GPUs while maintaining competitive performance across a sweep of standard code evaluation benchmarks, significantly faster than the contemporary Mercury and Gemini Diffusion models, establishing a new state of the art on the speed-quality Pareto frontier for code models.
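The speedup described above comes from filling many token positions per step instead of one. The paper's actual training and sampling procedures are not detailed here, but the general idea of iterative parallel refinement in discrete diffusion can be sketched as follows. Everything in this snippet (the `MASK` sentinel, the `toy_denoiser` stand-in, the confidence-based commit schedule) is a hypothetical illustration, not the model's real implementation:

```python
import random

MASK = -1  # hypothetical mask-token id; real models reserve a vocab entry for this


def toy_denoiser(tokens):
    """Stand-in for the trained model: proposes a token and a confidence
    score for every masked position. A real model would produce these with
    one transformer forward pass over the whole sequence in parallel."""
    return {i: (random.randrange(100), random.random())
            for i, t in enumerate(tokens) if t == MASK}


def parallel_decode(length, steps=4):
    """Iterative parallel refinement: start fully masked and, at each step,
    commit the most confident share of predictions. All kept positions in a
    step are filled simultaneously, unlike token-by-token decoding, so the
    number of model calls is `steps` rather than `length`."""
    tokens = [MASK] * length
    for step in range(steps):
        proposals = toy_denoiser(tokens)
        if not proposals:
            break
        # Linearly grow the committed fraction so the final step fills the rest.
        keep = max(1, len(proposals) * (step + 1) // steps)
        ranked = sorted(proposals.items(), key=lambda kv: -kv[1][1])
        for pos, (tok, _conf) in ranked[:keep]:
            tokens[pos] = tok
    return tokens
```

With `length=16` and `steps=4`, the sequence is completed in 4 model calls instead of 16; the reported throughput gains come from this kind of parallelism, combined with optimizations not shown in this toy sketch.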

Country of Origin
🇨🇳 China

Page Count
11 pages

Category
Computer Science:
Computation and Language