State Fourier Diffusion Language Model (SFDLM): A Scalable, Novel Iterative Approach to Language Modeling
By: Andrew Kiruluta, Andreas Lemos
Potential Business Impact:
Generates text by iteratively denoising corrupted word sequences, without requiring transformer-scale compute.
In recent years, diffusion-based methods have emerged as a powerful paradigm for generative modeling. Although discrete diffusion for natural language processing has been explored less extensively, it shows promise for tasks requiring iterative denoising of token-based data. Transformers dominate standard approaches to text generation, but their reliance on self-attention often incurs high computational costs. This paper introduces a fully diffusion-driven discrete text generation model built without any transformer or large convolution modules. Instead, the model integrates structured state-space dynamics in the time domain with a novel Complex Fourier Multi-Layer Perceptron module that operates in the frequency domain. The forward noising process replaces tokens with samples drawn uniformly from the vocabulary at a controlled probability, while the learned reverse model systematically reverts corrupted sequences toward their original states. By composing local state-space updates with global Fourier-based mixing, the approach captures both short- and long-range dependencies.
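To make the forward noising process concrete, the sketch below corrupts a batch of token ids by replacing each position, with a step-dependent probability, by a token sampled uniformly from the vocabulary. This is a minimal PyTorch sketch based only on the description above; the function name and the example noise schedule are illustrative assumptions, not the authors' implementation.

```python
import torch

def forward_noise(tokens: torch.Tensor, replace_prob: float, vocab_size: int) -> torch.Tensor:
    """Corrupt a batch of token ids: each position is independently
    replaced by a uniformly sampled vocabulary token with probability
    `replace_prob` (the noise level for the current diffusion step)."""
    mask = torch.rand(tokens.shape, device=tokens.device) < replace_prob
    random_tokens = torch.randint(0, vocab_size, tokens.shape, device=tokens.device)
    return torch.where(mask, random_tokens, tokens)

# Example: corrupt a toy batch at increasing noise levels.
tokens = torch.randint(0, 1000, (2, 16))      # (batch, seq_len) token ids
for prob in (0.1, 0.5, 0.9):                  # illustrative schedule, not the paper's
    noisy = forward_noise(tokens, prob, vocab_size=1000)
```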
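The reverse model's layer composition can be sketched similarly: a local time-domain update (here simplified to a diagonal state-space recurrence) followed by a global frequency-domain mixing step that applies an MLP to the real and imaginary parts of the sequence's Fourier transform. All module names, sizes, and the diagonal-SSM simplification are assumptions for illustration; the paper's actual parameterization may differ.

```python
import torch
import torch.nn as nn

class DiagonalSSM(nn.Module):
    """Minimal diagonal state-space recurrence h_t = a * h_{t-1} + b * x_t,
    a sequential stand-in for the structured state-space dynamics."""
    def __init__(self, dim: int):
        super().__init__()
        self.a = nn.Parameter(torch.full((dim,), 0.9))  # per-channel decay
        self.b = nn.Parameter(torch.ones(dim))          # per-channel input gain

    def forward(self, x):                     # x: (batch, seq_len, dim)
        h = torch.zeros_like(x[:, 0])
        outs = []
        for t in range(x.size(1)):
            h = self.a * h + self.b * x[:, t]
            outs.append(h)
        return torch.stack(outs, dim=1)

class ComplexFourierMLP(nn.Module):
    """Global mixing in the frequency domain: FFT over the sequence axis,
    pointwise MLPs on the real and imaginary parts, then an inverse FFT."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.real_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
        self.imag_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                     # x: (batch, seq_len, dim)
        f = torch.fft.fft(x, dim=1)           # mixes all sequence positions at once
        mixed = torch.complex(self.real_mlp(f.real), self.imag_mlp(f.imag))
        return torch.fft.ifft(mixed, dim=1).real

class SFDLMBlock(nn.Module):
    """Compose local state-space updates with global Fourier mixing."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.ssm = DiagonalSSM(dim)
        self.fourier = ComplexFourierMLP(dim, hidden)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        x = x + self.ssm(self.norm1(x))       # local, time-domain mixing
        x = x + self.fourier(self.norm2(x))   # global, frequency-domain mixing
        return x
```

Stacking such blocks gives a denoiser whose per-layer cost avoids the quadratic self-attention term, consistent with the efficiency motivation stated above.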
Similar Papers
Simple Denoising Diffusion Language Models
Machine Learning (CS)
Generates text with a simplified denoising diffusion language model.
DSFT: Inspiring Diffusion Large Language Models to Comprehend Mathematical and Logical Patterns
Machine Learning (CS)
Fine-tunes diffusion large language models to handle mathematical and logical patterns.
WeFT: Weighted Entropy-driven Fine-Tuning for dLLMs
Computation and Language
Uses entropy-weighted fine-tuning to improve reasoning in diffusion LLMs.