Decentralized Autoregressive Generation
By: Stepan Maschan, Haoxuan Qu, Jun Liu
Potential Business Impact:
Makes AI models learn faster and better together.
We present a theoretical analysis of the decentralization of autoregressive generation. We define the Decentralized Discrete Flow Matching objective by expressing the probability-generating velocity as a linear combination of expert flows. We also conduct experiments demonstrating the equivalence between decentralized and centralized training settings for multimodal language models across a diverse set of benchmarks. Specifically, we compare two distinct paradigms, LLaVA and InternVL 2.5-1B: the former uses a fixed CLIP vision encoder, while the latter performs full-parameter fine-tuning (ViT+MLP+LLM) during the instruction tuning stage.
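As a reading aid, the decomposition stated in the abstract can be sketched in standard flow-matching notation; the symbols $u_t$, the expert velocities $u_t^{(k)}$, and the mixture weights $w_k$ are our own labels rather than the paper's notation, and the normalization of the weights is an assumption made here for concreteness:

$$u_t(x) \;=\; \sum_{k=1}^{K} w_k\, u_t^{(k)}(x),$$

where each $u_t^{(k)}$ denotes the probability-generating velocity contributed by expert $k$, so that the aggregate velocity driving generation is a linear combination of the decentralized expert flows.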
Similar Papers
A solvable model of learning generative diffusion: theory and insights
Machine Learning (CS)
Teaches computers to create realistic pictures.
Latent-Autoregressive GP-VAE Language Model
Machine Learning (CS)
Lets computers write stories by understanding time.
Masked Auto-Regressive Variational Acceleration: Fast Inference Makes Practical Reinforcement Learning
Machine Learning (CS)
Makes AI create pictures much faster and better.