Vision Bridge Transformer at Scale
By: Zhenxiong Tan, Zeqing Wang, Xingyi Yang, and more
Potential Business Impact:
Edits pictures and videos with simple instructions.
We introduce Vision Bridge Transformer (ViBT), a large-scale instantiation of Brownian Bridge Models designed for conditional generation. Unlike traditional diffusion models that transform noise into data, Bridge Models directly model the trajectory between inputs and outputs, creating an efficient data-to-data translation paradigm. By scaling these models to 20B and 1.3B parameters, we demonstrate their effectiveness for image and video translation tasks. To support this scale, we adopt a Transformer architecture and propose a variance-stabilized velocity-matching objective for robust training. Together, these advances highlight the power of scaling Bridge Models for instruction-based image editing and complex video translation.
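Since the abstract only names the Brownian bridge formulation and a velocity-matching objective without giving equations, here is a minimal, hypothetical sketch of what such a training step could look like. It assumes a standard Brownian bridge interpolant between the conditioning input x0 and the target x1, and uses a simple time-clamping trick in place of the paper's actual variance-stabilized objective; the function names, the sigma parameter, and the clamping constant are all illustrative assumptions, not the authors' implementation.

```python
import torch

def brownian_bridge_sample(x0, x1, t, sigma=1.0):
    """Sample x_t on a Brownian bridge pinned at x0 (t=0) and x1 (t=1).

    Standard bridge marginal (assumed, not from the paper):
    x_t = (1-t)*x0 + t*x1 + sigma*sqrt(t*(1-t))*eps.
    """
    eps = torch.randn_like(x0)
    t = t.view(-1, *([1] * (x0.dim() - 1)))  # broadcast time over spatial dims
    return (1 - t) * x0 + t * x1 + sigma * torch.sqrt(t * (1 - t)) * eps

def velocity_matching_loss(model, x0, x1, sigma=1.0, eps_t=1e-3):
    """One training step of a hypothetical velocity-matching objective.

    The regression target (x1 - x_t) / (1 - t) is the conditional drift of the
    bridge toward the data endpoint; the 1/(1-t) factor grows without bound
    near t=1, so this sketch simply keeps t away from 1. The paper's
    variance-stabilized objective likely handles this differently.
    """
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device) * (1 - 2 * eps_t) + eps_t  # t in (eps, 1-eps)
    x_t = brownian_bridge_sample(x0, x1, t, sigma)
    t_b = t.view(-1, *([1] * (x0.dim() - 1)))
    target = (x1 - x_t) / (1 - t_b)   # bridge drift toward the target x1
    pred = model(x_t, t)              # network predicts the velocity field
    return ((pred - target) ** 2).mean()
```

In use, `model` would be the conditional Transformer, `x0` the latent of the input image or video, and `x1` the latent of the edited output, so the network learns a direct data-to-data trajectory rather than a noise-to-data one.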
Similar Papers
Time-Correlated Video Bridge Matching
Machine Learning (CS)
Makes videos look smoother and more real.
Deeper Inside Deep ViT
CV and Pattern Recognition
Makes computers create better pictures from ideas.
Versatile Transition Generation with Image-to-Video Diffusion
CV and Pattern Recognition
Creates smooth video transitions between scenes.