DiffusionBrowser: Interactive Diffusion Previews via Multi-Branch Decoders
By: Susung Hong, Chongjian Ge, Zhifei Zhang, and more
Potential Business Impact:
Lets you preview videos while they are still being generated.
Video diffusion models have revolutionized generative video synthesis, but they are imprecise, slow, and opaque during generation, keeping users in the dark for prolonged periods. In this work, we propose DiffusionBrowser, a model-agnostic, lightweight decoder framework that lets users interactively generate previews at any point (timestep or transformer block) during the denoising process. Our decoder produces multi-modal preview representations, including RGB and scene intrinsics, at more than 4× real-time speed (under 1 second for a 4-second video), and these previews convey appearance and motion consistent with the final video. With the trained decoder, we show that generation can be interactively guided at intermediate noise steps via stochasticity reinjection and modal steering, unlocking a new control capability. Moreover, we systematically probe the model with the learned decoders, revealing how scene, object, and other details are composed and assembled during the otherwise black-box denoising process.
Similar Papers
StreamDiffusionV2: A Streaming System for Dynamic and Interactive Video Generation
CV and Pattern Recognition
Makes live videos change instantly as you create them.
TransDiffuser: Diverse Trajectory Generation with Decorrelated Multi-modal Representation for End-to-end Autonomous Driving
Robotics
Helps self-driving cars plan safer, varied routes.
Bitrate-Controlled Diffusion for Disentangling Motion and Content in Video
CV and Pattern Recognition
Separates a video's motion from its visual content.