View-Consistent Diffusion Representations for 3D-Consistent Video Generation
By: Duolikun Danier, Ge Gao, Steven McDonagh, and others
Potential Business Impact:
Makes computer-generated videos look more realistic, especially when the camera moves.
Video generation models have made significant progress in generating realistic content, enabling applications in simulation, gaming, and filmmaking. However, current generated videos still contain visual artifacts arising from 3D inconsistencies, e.g., objects and structures deforming under changes in camera pose, which can undermine user experience and simulation fidelity. Motivated by recent findings on representation alignment for diffusion models, we hypothesize that improving the multi-view consistency of video diffusion representations will yield more 3D-consistent video generation. Through a detailed analysis of multiple recent camera-controlled video diffusion models, we reveal strong correlations between the 3D consistency of their internal representations and that of the generated videos. We also propose ViCoDR, a new approach for improving the 3D consistency of video models by learning multi-view consistent diffusion representations. We evaluate ViCoDR on camera-controlled image-to-video, text-to-video, and multi-view generation models, demonstrating significant improvements in the 3D consistency of the generated videos. Project page: https://danier97.github.io/ViCoDR.
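The abstract does not spell out the training objective, but one plausible reading of "learning multi-view consistent diffusion representations" is a feature-alignment loss that pulls together diffusion features at points that correspond across views or frames. The sketch below is a minimal illustration under that assumption; the function name, inputs, and the cosine-similarity choice are hypothetical and are not ViCoDR's actual implementation.

```python
import torch
import torch.nn.functional as F

def multiview_consistency_loss(feats_a, feats_b, corr_a, corr_b):
    """
    Hypothetical multi-view consistency objective (not ViCoDR's actual loss):
    encourage intermediate diffusion features at corresponding pixel locations
    in two views/frames to agree.

    feats_a, feats_b: (C, H, W) feature maps from an intermediate diffusion layer.
    corr_a, corr_b:   (N, 2) integer (y, x) coordinates of N corresponding points,
                      e.g., obtained from known camera geometry.
    """
    fa = feats_a[:, corr_a[:, 0], corr_a[:, 1]].T  # (N, C) features in view A
    fb = feats_b[:, corr_b[:, 0], corr_b[:, 1]].T  # (N, C) features in view B
    # Negative cosine similarity: minimized when matched features align.
    return -F.cosine_similarity(fa, fb, dim=-1).mean()

if __name__ == "__main__":
    # Toy demo with random features and identity correspondences.
    feats_a = torch.randn(64, 32, 32)
    feats_b = torch.randn(64, 32, 32)
    corr = torch.randint(0, 32, (100, 2))
    print(multiview_consistency_loss(feats_a, feats_b, corr, corr))
```

In a setup like this, the loss would be added to the standard diffusion training objective, so that features of the same 3D point seen from different camera poses converge while the denoising task is still learned.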
Similar Papers
DisCo3D: Distilling Multi-View Consistency for 3D Scene Editing
CV and Pattern Recognition
Changes 3D objects in pictures consistently across views.
3D-Consistent Multi-View Editing by Diffusion Guidance
CV and Pattern Recognition
Makes 3D pictures look right after editing.
Fast Multi-view Consistent 3D Editing with Video Priors
CV and Pattern Recognition
Changes 3D objects with simple text commands.