MoE-DiffuSeq: Enhancing Long-Document Diffusion Models with Sparse Attention and Mixture of Experts
By: Alexandros Christoforos, Chadbourne Davis
Potential Business Impact:
Writes long documents and code much faster.
We present MoE-DiffuSeq, a mixture-of-experts framework for enhancing diffusion models in long-document generation. Existing diffusion-based text generation models, such as DiffuSeq, incur high computational cost and memory overhead when applied to extended sequences. To address these challenges, MoE-DiffuSeq integrates sparse attention with a mixture-of-experts architecture, enabling efficient and scalable long-sequence modeling. Our approach introduces a customized sparse attention mechanism that reduces computational complexity while preserving text quality and coherence. In addition, we incorporate a soft absorbing state into the diffusion process to accelerate sequence reconstruction and improve generation precision. Extensive experiments demonstrate that MoE-DiffuSeq significantly improves training efficiency and sampling speed over existing diffusion models, with the gains most pronounced in long-document scenarios such as scientific article generation, code repository modeling, and long-form dialogue generation. Benchmark results further show that MoE-DiffuSeq improves efficiency, sampling speed, accuracy, and expressiveness, advancing the practical applicability of diffusion models for high-quality long-form text generation.
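The abstract does not give implementation details, but the two main ingredients it names, a sparse attention mechanism and a mixture-of-experts layer inside the denoiser, can be illustrated concretely. The following is a minimal PyTorch sketch assuming a sliding-window sparse attention pattern and top-1 token routing; the module names, window size, and expert count are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a sparse-attention + mixture-of-experts denoiser block,
# assuming a sliding-window attention pattern and top-1 expert routing.
import torch
import torch.nn as nn


def local_attention_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask where True marks positions each token may attend to
    (a banded / sliding-window pattern, one common sparse-attention choice)."""
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() <= window


class MoEFeedForward(nn.Module):
    """Top-1 gated mixture of experts replacing a dense feed-forward block."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); route each token to its best-scoring expert.
        scores = self.gate(x)                        # (B, S, E)
        weight, choice = scores.softmax(-1).max(-1)  # top-1 weight and index
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = choice == e
            if sel.any():
                out[sel] = weight[sel].unsqueeze(-1) * expert(x[sel])
        return out


class SparseMoEBlock(nn.Module):
    """One denoiser block: windowed self-attention followed by the MoE FFN."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, window: int = 64):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.moe = MoEFeedForward(d_model, 4 * d_model)
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # In nn.MultiheadAttention, True in attn_mask means "do not attend",
        # so the allowed-positions mask is inverted here.
        mask = ~local_attention_mask(x.size(1), self.window).to(x.device)
        h, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + h)
        return self.norm2(x + self.moe(x))


if __name__ == "__main__":
    block = SparseMoEBlock()
    tokens = torch.randn(2, 512, 256)  # noisy token embeddings at one diffusion step
    print(block(tokens).shape)         # torch.Size([2, 512, 256])
```

Under this reading, the banded mask keeps attention cost roughly linear in sequence length while the MoE layer activates only one expert per token, which is consistent with the efficiency claims in the abstract; the soft absorbing state would additionally reserve a special token in the diffusion noising schedule, a detail the abstract mentions but does not specify.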
Similar Papers
Efficient Training of Diffusion Mixture-of-Experts Models: A Practical Recipe
Machine Learning (CS)
Makes AI image generators work faster and better.
Dynamic Adaptive Shared Experts with Grouped Multi-Head Attention Mixture of Experts
Machine Learning (CS)
Makes computers understand long stories better.
MoMoE: A Mixture of Expert Agent Model for Financial Sentiment Analysis
Computational Engineering, Finance, and Science
Makes AI smarter by letting many AI parts work together.