Score: 1

Reasoning in Diffusion Large Language Models is Concentrated in Dynamic Confusion Zones

Published: November 19, 2025 | arXiv ID: 2511.15208v1

By: Ranfei Chen, Ming Chen, Kaifei Wang

Potential Business Impact:

Trains AI reasoning models more effectively by focusing learning on the trickiest steps of their thought process.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Diffusion Large Language Models (dLLMs) are rapidly emerging alongside autoregressive models as a powerful paradigm for complex reasoning, with reinforcement learning increasingly used for downstream alignment. Existing trajectory-based RL methods uniformly allocate policy gradients across denoising steps, implicitly treating all steps as equally important. We challenge this assumption by analyzing trajectories with several step-level metrics: entropy-based uncertainty, Confidence-Margin (CM) uncertainty, and Rate of Entropy Change (RoEC). These reveal structured "zones of confusion": transient spikes in uncertainty and instability that strongly predict final success or failure, while most steps remain stable. We propose Adaptive Trajectory Policy Optimization (ATPO), a lightweight step-selection strategy that dynamically reallocates gradient updates to these high-leverage steps without changing the RL objective, rewards, or compute budget. Using a hybrid RoEC+CM rule, ATPO delivers substantial gains in reasoning accuracy and training stability across benchmarks, showing that exploiting trajectory dynamics is key to advancing dLLM RL.
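The abstract names three step-level signals (entropy-based uncertainty, Confidence-Margin, and Rate of Entropy Change) and a hybrid RoEC+CM rule for picking high-leverage denoising steps. Below is a minimal, hypothetical sketch of how such metrics and a selection rule could be computed from per-step token distributions; the function names, thresholds, and exact formulas are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of step-level metrics and a hybrid RoEC+CM selection rule,
# based only on the quantities named in the abstract; the paper's exact
# formulas, thresholds, and selection logic may differ.
import numpy as np

def step_entropy(probs):
    """Mean token entropy at one denoising step; probs: [num_tokens, vocab]."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=-1)))

def confidence_margin(probs):
    """Mean gap between top-1 and top-2 token probabilities (higher = more confident)."""
    top2 = np.sort(probs, axis=-1)[:, -2:]
    return float(np.mean(top2[:, 1] - top2[:, 0]))

def select_confusion_steps(trajectory_probs, roec_thresh=0.1, cm_thresh=0.2):
    """Pick denoising steps where entropy spikes or confidence margin is low.

    trajectory_probs: list of [num_tokens, vocab] arrays, one per denoising step.
    Returns indices of 'confusion zone' steps that would receive the
    reallocated policy-gradient updates (thresholds are placeholders).
    """
    entropies = [step_entropy(p) for p in trajectory_probs]
    margins = [confidence_margin(p) for p in trajectory_probs]
    selected = []
    for t in range(1, len(entropies)):
        roec = entropies[t] - entropies[t - 1]  # rate of entropy change between steps
        if roec > roec_thresh or margins[t] < cm_thresh:
            selected.append(t)
    return selected
```

In this reading, the RL objective and rewards stay untouched; only which denoising steps contribute gradient updates changes, which is why the compute budget is unaffected.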

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)