C$^2$DLM: Causal Concept-Guided Diffusion Large Language Models
By: Kairong Han, Nuanqiao Shan, Ziyu Zhao, and more
Potential Business Impact:
Teaches computers to think step-by-step like people.
Autoregressive (AR) language models and Diffusion Language Models (DLMs) constitute the two principal paradigms of large language models. However, both paradigms suffer from insufficient reasoning capabilities. Human reasoning inherently relies on causal knowledge and thought, which are reflected in natural language. In the AR paradigm, however, language is modeled as next-token prediction (a strictly left-to-right, token-by-token order), whereas natural language itself exhibits more flexible causal structures. In the DLM paradigm, the attention mechanism is fully connected, which disregards causal order entirely. To fill this gap, we propose a \underline{\textbf{C}}ausal \underline{\textbf{C}}oncept-Guided \underline{\textbf{D}}iffusion \underline{\textbf{L}}anguage \underline{\textbf{M}}odel (C$^2$DLM). Starting from the DLM's fully connected attention, C$^2$DLM first obtains a concept-level causal graph from the teacher model, and then explicitly guides attention to learn causal relationships between concepts. By focusing on causal relationships and avoiding interference from difficult subgoals involving causal inversion, C$^2$DLM improves by 12\% with about a 3.2$\times$ training speedup on the COT-OrderPerturb task, and achieves an average gain of 1.31\% across six downstream reasoning tasks. More details are available in the repository \href{https://github.com/Kairong-Han/C-2-DLM}{here}.
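To make the core idea concrete, below is a minimal, hypothetical sketch of how a concept-level causal graph could be translated into an additive attention bias for a fully connected DLM layer. The function name, the token-to-concept mapping, the penalty value, and the mask semantics are illustrative assumptions, not the authors' released implementation.

import torch

def build_concept_attention_bias(token_concepts, causal_edges, num_tokens, penalty=-1e4):
    """Turn a concept-level causal graph into an additive attention bias.

    token_concepts: list mapping each token index to a concept id (or None).
    causal_edges:   set of (cause_concept, effect_concept) pairs.
    Tokens attend freely within a concept; across concepts, attention is
    penalized unless a causal edge links the key's concept to the query's.
    (Illustrative assumption, not the paper's exact masking rule.)
    """
    bias = torch.zeros(num_tokens, num_tokens)
    for q in range(num_tokens):
        for k in range(num_tokens):
            cq, ck = token_concepts[q], token_concepts[k]
            if cq is None or ck is None or cq == ck:
                continue  # unconstrained: non-concept tokens and same-concept pairs
            if (ck, cq) not in causal_edges:  # key's concept does not cause query's concept
                bias[q, k] = penalty
    return bias

# Toy usage: 6 tokens, two concepts, with concept 0 causing concept 1.
token_concepts = [0, 0, None, 1, 1, None]
causal_edges = {(0, 1)}
bias = build_concept_attention_bias(token_concepts, causal_edges, num_tokens=6)
# Adding `bias` to pre-softmax attention scores lets cross-concept attention
# follow the causal direction instead of being fully connected.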
Similar Papers
A Survey on Diffusion Language Models
Computation and Language
Makes computers write faster and understand better.
Efficient-DLM: From Autoregressive to Diffusion Language Models, and Beyond in Speed
Computation and Language
Makes AI write faster without losing quality.
Diffusion Language Models are Super Data Learners
Machine Learning (CS)
Makes AI better at writing code with less data.