Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces
By: Kevin Rojas, Yuchen Zhu, Sichen Zhu, and more
Potential Business Impact:
Lets computers create matching pictures and words.
Diffusion models have demonstrated remarkable performance in generating unimodal data across various tasks, including image, video, and text generation. In contrast, the joint generation of multimodal data with diffusion models is still in the early stages of exploration. Existing approaches rely heavily on external preprocessing protocols, such as tokenizers and variational autoencoders, to harmonize varied data representations into a unified, unimodal format. This places heavy demands on the accuracy of the encoders and decoders, which can be problematic for applications with limited data. To lift this restriction, we propose a novel framework for building multimodal diffusion models on arbitrary state spaces, enabling native generation of coupled data across different modalities. By introducing a decoupled noise schedule for each modality, we enable both unconditional and modality-conditioned generation simultaneously within a single model. We empirically validate our approach on text-image generation and mixed-type tabular data synthesis, demonstrating competitive performance.
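The core mechanism named in the abstract, a decoupled noise schedule per modality, can be sketched roughly as follows. This is an illustrative assumption, not the paper's implementation: it samples an independent diffusion time for each modality, corrupts the continuous modality (images) with Gaussian noise and the discrete modality (text tokens) with masking, and trains a single denoiser on both. The names `model`, `noise_continuous`, `noise_discrete`, `MASK_ID`, and the particular schedules and loss weighting are placeholders.

```python
import torch
import torch.nn.functional as F

# Minimal sketch (not the authors' code) of a decoupled noise schedule:
# each modality is diffused in its own state space with its own
# independently sampled time. Because the network is trained across all
# combinations of times, fixing one modality's time at 0 at sampling time
# yields modality-conditioned generation from the same model.

MASK_ID = 0  # hypothetical mask token for the discrete (text) modality


def noise_continuous(x, t):
    """Gaussian corruption with a toy linear (variance-preserving style) schedule."""
    alpha = (1.0 - t).view(-1, *([1] * (x.dim() - 1)))
    eps = torch.randn_like(x)
    return alpha * x + (1.0 - alpha**2).clamp(min=0).sqrt() * eps, eps


def noise_discrete(tokens, t):
    """Absorbing-state corruption: mask each token with probability t."""
    drop = torch.rand(tokens.shape, device=tokens.device) < t.view(-1, 1)
    return torch.where(drop, torch.full_like(tokens, MASK_ID), tokens)


def training_step(model, image, text_tokens, vocab_size):
    b = image.shape[0]
    # Decoupled schedule: an independent diffusion time per modality, so the
    # model also sees pairs where one modality is (nearly) clean.
    t_img = torch.rand(b, device=image.device)
    t_txt = torch.rand(b, device=image.device)

    noisy_img, eps = noise_continuous(image, t_img)
    noisy_txt = noise_discrete(text_tokens, t_txt)

    # One network denoises both modalities jointly, conditioned on both times.
    pred_eps, logits = model(noisy_img, noisy_txt, t_img, t_txt)

    loss = F.mse_loss(pred_eps, eps) + F.cross_entropy(
        logits.reshape(-1, vocab_size), text_tokens.reshape(-1)
    )
    return loss
```

In this sketch, unconditional generation would run both reverse processes together, while conditioning on, say, the text would hold `t_txt = 0` and the clean tokens fixed and denoise only the image.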
Similar Papers
Unified Multimodal Discrete Diffusion
CV and Pattern Recognition
Creates pictures and stories together, better than before.
TransDiffuser: Diverse Trajectory Generation with Decorrelated Multi-modal Representation for End-to-end Autonomous Driving
Robotics
Helps self-driving cars plan safer, varied routes.
Controllable Motion Generation via Diffusion Modal Coupling
Robotics
Robots can now choose the best way to move.