MRO: Enhancing Reasoning in Diffusion Language Models via Multi-Reward Optimization
By: Chenglong Wang, Yang Gan, Hang Zhou, and more
Potential Business Impact:
Makes AI think better and faster.
Recent advances in diffusion language models (DLMs) have presented a promising alternative to traditional autoregressive large language models (LLMs). However, DLMs still lag behind LLMs in reasoning performance, especially as the number of denoising steps decreases. Our analysis reveals that this shortcoming arises primarily from the independent generation of masked tokens across denoising steps, which fails to capture token correlations. In this paper, we define two types of token correlation, intra-sequence correlation and inter-sequence correlation, and demonstrate that strengthening both improves reasoning performance. To this end, we propose a Multi-Reward Optimization (MRO) approach that encourages DLMs to account for token correlations during the denoising process. More specifically, MRO leverages test-time scaling, rejection sampling, and reinforcement learning to directly optimize token correlations with multiple carefully designed rewards. Additionally, we introduce group step and importance sampling strategies to reduce reward variance and improve sampling efficiency. Extensive experiments demonstrate that MRO not only improves reasoning performance but also achieves significant sampling speedups while maintaining strong performance on reasoning benchmarks.
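To make the rejection-sampling side of the idea concrete, the snippet below is a minimal, self-contained sketch: draw several candidate denoising trajectories and keep the one that scores highest under a weighted combination of rewards. It is not the paper's implementation; `sample_trajectory`, `intra_sequence_reward`, `task_reward`, the reward weights, and the random stand-in for a DLM denoiser are all hypothetical placeholders for the abstract's reward definitions and sampling procedure.

```python
import random

# Hypothetical stand-in for a masked diffusion LM: we sample random token ids
# so the sketch runs without any model weights. VOCAB_SIZE is an assumption.
VOCAB_SIZE = 50


def sample_trajectory(length: int, num_steps: int) -> list[int]:
    """Pretend to run `num_steps` denoising steps and return a token sequence.

    The step count is ignored in this stub; a real DLM would iteratively
    unmask tokens over `num_steps` denoising steps.
    """
    return [random.randrange(VOCAB_SIZE) for _ in range(length)]


def intra_sequence_reward(tokens: list[int]) -> float:
    """Toy proxy for intra-sequence correlation: reward adjacent-token agreement."""
    if len(tokens) < 2:
        return 0.0
    matches = sum(1 for a, b in zip(tokens, tokens[1:]) if abs(a - b) <= 5)
    return matches / (len(tokens) - 1)


def task_reward(tokens: list[int]) -> float:
    """Placeholder task-level reward (e.g., answer correctness); random here."""
    return random.random()


def multi_reward(tokens: list[int], weights: tuple[float, float] = (0.5, 0.5)) -> float:
    """Weighted sum of the individual rewards (weights are illustrative)."""
    return weights[0] * intra_sequence_reward(tokens) + weights[1] * task_reward(tokens)


def rejection_sample(num_candidates: int = 8, length: int = 16, num_steps: int = 4):
    """Draw several candidate trajectories and keep the highest-reward one."""
    candidates = [sample_trajectory(length, num_steps) for _ in range(num_candidates)]
    scored = [(multi_reward(c), c) for c in candidates]
    return max(scored, key=lambda pair: pair[0])


if __name__ == "__main__":
    best_score, best_tokens = rejection_sample()
    print(f"best combined reward: {best_score:.3f}")
    print(f"selected tokens: {best_tokens}")
```

In the full method, reinforcement learning would additionally update the denoiser toward high-reward trajectories rather than merely filtering them; this sketch only covers the rejection-sampling view under the stated assumptions.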
Similar Papers
Direct Reasoning Optimization: LLMs Can Reward And Refine Their Own Reasoning for Open-Ended Tasks
Computation and Language
Teaches computers to think better for hard problems.
Improving Reasoning for Diffusion Language Models via Group Diffusion Policy Optimization
Machine Learning (CS)
Teaches AI to solve math and code problems better.
MDPO: Overcoming the Training-Inference Divide of Masked Diffusion Language Models
Machine Learning (CS)
Teaches AI to write better by practicing like humans.