Choreographing a World of Dynamic Objects
By: Yanzhe Lyu, Chen Geng, Karthik Dharmarajan, and more
Potential Business Impact:
Generates realistic object motion for films and robotics.
Dynamic objects in our physical 4D (3D + time) world are constantly evolving, deforming, and interacting with other objects, giving rise to diverse 4D scene dynamics. In this paper, we present CHORD, a universal generative pipeline for CHOReographing Dynamic objects and scenes and synthesizing these phenomena. Traditional rule-based graphics pipelines for creating such dynamics rely on category-specific heuristics and are labor-intensive and hard to scale. Recent learning-based methods typically demand large-scale datasets, which may not cover all object categories of interest. Our approach instead inherits its universality from video generative models through a distillation-based pipeline that extracts the rich Lagrangian motion information hidden in the Eulerian representations of 2D videos. Our method is universal, versatile, and category-agnostic. We demonstrate its effectiveness through experiments generating a diverse range of multi-body 4D dynamics, show its advantages over existing methods, and demonstrate its applicability to generating robotic manipulation policies. Project page: https://yanzhelyu.github.io/chord
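The Eulerian-versus-Lagrangian distinction is central here: 2D video models encode motion Eulerian-style, as per-pixel flow observed at fixed grid locations in each frame, whereas choreographing objects requires Lagrangian trajectories of specific material points over time. The abstract does not detail the paper's distillation pipeline, so the snippet below is only a minimal illustrative sketch of the underlying idea; all names (`eulerian_to_lagrangian`, `flows`, `seed_points`) are hypothetical and not from the paper. It shows how Lagrangian point trajectories can in principle be recovered from an Eulerian flow field by advecting seed points forward through per-frame optical flow.

```python
import numpy as np

def eulerian_to_lagrangian(flows, seed_points):
    """Illustrative sketch (not the paper's method): convert an Eulerian motion
    representation (per-frame dense optical flow, shape [T, H, W, 2], where
    flows[t] maps frame t -> t+1) into Lagrangian point trajectories by
    advecting seed points through the flow field.

    seed_points: [N, 2] array of (x, y) pixel coordinates at frame 0.
    Returns trajectories of shape [T+1, N, 2].
    """
    T, H, W, _ = flows.shape
    pts = seed_points.astype(np.float64).copy()
    traj = [pts.copy()]
    for t in range(T):
        # Bilinearly sample the flow at each (sub-pixel) point location.
        x = np.clip(pts[:, 0], 0, W - 1 - 1e-6)
        y = np.clip(pts[:, 1], 0, H - 1 - 1e-6)
        x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
        dx, dy = x - x0, y - y0
        f = flows[t]
        sampled = ((1 - dx)[:, None] * (1 - dy)[:, None] * f[y0, x0]
                   + dx[:, None] * (1 - dy)[:, None] * f[y0, x0 + 1]
                   + (1 - dx)[:, None] * dy[:, None] * f[y0 + 1, x0]
                   + dx[:, None] * dy[:, None] * f[y0 + 1, x0 + 1])
        pts = pts + sampled          # advect points to the next frame
        traj.append(pts.copy())
    return np.stack(traj)            # [T+1, N, 2] Lagrangian trajectories
```

In a distillation setting, trajectories recovered this way (or by a learned point tracker) from generated videos could supervise a 3D motion representation; how CHORD actually performs this extraction is described in the paper, not here.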
Similar Papers
VerseCrafter: Dynamic Realistic Video World Model with 4D Geometric Control
CV and Pattern Recognition
Creates realistic videos with controllable objects and cameras.
Inferring Compositional 4D Scenes without Ever Seeing One
CV and Pattern Recognition
Builds 3D worlds from videos, showing moving objects.
LiDARCrafter: Dynamic 4D World Modeling from LiDAR Sequences
CV and Pattern Recognition
Makes self-driving cars "see" and move better.