Score: 1

ReFusion: A Diffusion Large Language Model with Parallel Autoregressive Decoding

Published: December 15, 2025 | arXiv ID: 2512.13586v1

By: Jia-Nan Li, Jian Guan, Wei Wu, and more

Potential Business Impact:

Speeds up AI text generation substantially (over 2× versus standard autoregressive models on average) while maintaining output quality, lowering serving latency and inference cost.

Business Areas:
Autonomous Vehicles, Transportation

Autoregressive models (ARMs) are hindered by slow sequential inference. While masked diffusion models (MDMs) offer a parallel alternative, they suffer from critical drawbacks: high computational overhead from precluding Key-Value (KV) caching, and incoherent generation arising from learning dependencies over an intractable space of token combinations. To address these limitations, we introduce ReFusion, a novel masked diffusion model that achieves superior performance and efficiency by elevating parallel decoding from the token level to a higher slot level, where each slot is a fixed-length, contiguous sub-sequence. This is achieved through an iterative "plan-and-infill" decoding process: a diffusion-based planning step first identifies a set of weakly dependent slots, and an autoregressive infilling step then decodes these selected slots in parallel. The slot-based design simultaneously unlocks full KV cache reuse within a unified causal framework and reduces the learning complexity from the token combination space to a manageable slot-level permutation space. Extensive experiments on seven diverse benchmarks show that ReFusion not only surpasses prior MDMs by a wide margin, with 34% performance gains and a more than 18× speedup on average, but also bridges the performance gap to strong ARMs while maintaining a 2.33× average speedup.
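
To make the decoding procedure concrete, below is a minimal, runnable Python sketch of the iterative plan-and-infill loop as described in the abstract. All names here (plan_slots, infill_slot, SLOT_LEN) are hypothetical stand-ins rather than the paper's actual API: the planning step is stubbed as random slot selection and the infilling step emits placeholder tokens.

```python
# Illustrative sketch of ReFusion-style "plan-and-infill" decoding, based only
# on the abstract. All helper names and constants are hypothetical assumptions.

import random

SLOT_LEN = 4          # fixed length of each contiguous slot (assumed value)
NUM_SLOTS = 6         # sequence is NUM_SLOTS * SLOT_LEN tokens long
MASK = "<mask>"

def plan_slots(seq, masked_slots):
    """Diffusion-based planning step (stubbed): select a set of still-masked
    slots assumed to be weakly dependent, so they can be filled in parallel.
    Here we simply sample up to two slots at random as a stand-in."""
    k = min(2, len(masked_slots))
    return random.sample(sorted(masked_slots), k)

def infill_slot(seq, slot_idx):
    """Autoregressive infilling step (stubbed): decode one slot's tokens
    left to right, conditioning on the rest of the sequence. A real model
    would batch the chosen slots and reuse the KV cache."""
    start = slot_idx * SLOT_LEN
    return [f"tok{start + i}" for i in range(SLOT_LEN)]

def refusion_decode():
    seq = [MASK] * (NUM_SLOTS * SLOT_LEN)
    masked = set(range(NUM_SLOTS))
    rounds = 0
    while masked:
        chosen = plan_slots(seq, masked)    # plan: pick weakly dependent slots
        for s in chosen:                    # infill: in practice, in parallel
            seq[s * SLOT_LEN:(s + 1) * SLOT_LEN] = infill_slot(seq, s)
        masked -= set(chosen)
        rounds += 1
    return seq, rounds

if __name__ == "__main__":
    seq, rounds = refusion_decode()
    print(f"decoded {len(seq)} tokens in {rounds} plan-and-infill rounds")
```

In a real implementation, the planning step would presumably come from the diffusion model's predictions over slots rather than random sampling, and the chosen slots would be decoded through a single causal forward pass so the KV cache can be fully reused, which is where the reported speedups would come from.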

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
22 pages

Category
Computer Science:
Computation and Language