WeFT: Weighted Entropy-driven Fine-Tuning for dLLMs

Published: September 25, 2025 | arXiv ID: 2509.20863v1

By: Guowei Xu, Wenxin Xu, Jiawang Zhao, and more

Potential Business Impact:

Improves AI performance on reasoning tasks such as puzzles and math problems.

Business Areas:
Text Analytics, Data and Analytics, Software

Diffusion models have recently shown strong potential in language modeling, offering faster generation compared to traditional autoregressive approaches. However, applying supervised fine-tuning (SFT) to diffusion models remains challenging, as they lack precise probability estimates at each denoising step. While the diffusion mechanism enables the model to reason over entire sequences, it also makes the generation process less predictable and often inconsistent. This highlights the importance of controlling key tokens that guide the direction of generation. To address this issue, we propose WeFT, a weighted SFT method for diffusion language models, where tokens are assigned different weights based on their entropy. Derived from diffusion theory, WeFT delivers substantial gains: training on s1K, s1K-1.1, and 3k samples from open-r1, it achieves relative improvements of 39%, 64%, and 83% over standard SFT on four widely used reasoning benchmarks (Sudoku, Countdown, GSM8K, and MATH-500). The code and models will be made publicly available.
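The core idea described in the abstract, weighting each token's SFT loss by the model's predictive entropy so that uncertain "key" tokens steer training, can be sketched as follows. This is an illustrative stand-in, not the paper's exact weighting (which the authors derive from diffusion theory); the function names and the choice of normalized raw entropy as the weight are assumptions for the sketch.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def token_entropy(logits):
    """Shannon entropy of the model's predictive distribution for one token."""
    probs = softmax(logits)
    return -sum(p * math.log(p + 1e-12) for p in probs)

def weighted_sft_loss(seq_logits, targets):
    """Entropy-weighted SFT loss (hypothetical sketch): tokens where the
    model is more uncertain receive larger weights, so high-entropy tokens
    that influence the generation direction dominate the gradient.
    seq_logits: list of per-token logit lists; targets: gold token ids."""
    entropies = [token_entropy(l) for l in seq_logits]
    total = sum(entropies) or 1.0
    weights = [h / total for h in entropies]  # normalize over the sequence
    loss = 0.0
    for w, logits, t in zip(weights, seq_logits, targets):
        probs = softmax(logits)
        loss += w * -math.log(probs[t] + 1e-12)  # weighted cross-entropy
    return loss
```

In this toy form, a token whose distribution is nearly one-hot (low entropy) contributes little to the loss, while a near-uniform (high-entropy) token contributes most, which matches the abstract's emphasis on controlling key tokens.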

Country of Origin
🇨🇳 China

Page Count
17 pages

Category
Computer Science:
Computation and Language