DiffER: Diffusion Entity-Relation Modeling for Reversal Curse in Diffusion Large Language Models
By: Shaokai He, Kaiwen Wei, Xinyi Zeng, and more
The "reversal curse" refers to the phenomenon where large language models (LLMs) exhibit predominantly unidirectional behavior when processing logically bidirectional relationships. Prior work attributed this to autoregressive training -- predicting the next token inherently favors left-to-right information flow over genuine bidirectional knowledge associations. However, we observe that Diffusion LLMs (DLLMs), despite being trained bidirectionally, also suffer from the reversal curse. To investigate the root causes, we conduct systematic experiments on DLLMs and identify three key reasons: 1) entity fragmentation during training, 2) data asymmetry, and 3) missing entity relations. Motivated by the analysis of these reasons, we propose Diffusion Entity-Relation Modeling (DiffER), which addresses the reversal curse through entity-aware training and balanced data construction. Specifically, DiffER introduces whole-entity masking, which mitigates entity fragmentation by predicting complete entities in a single step. DiffER further employs distribution-symmetric and relation-enhanced data construction strategies to alleviate data asymmetry and missing relations. Extensive experiments demonstrate that DiffER effectively alleviates the reversal curse in Diffusion LLMs, offering new perspectives for future research.