ObjectForesight: Predicting Future 3D Object Trajectories from Human Videos
By: Rustin Soraki, Homanga Bharadhwaj, Ali Farhadi, and more
Potential Business Impact:
Lets computers predict how objects will move when people interact with them.
Humans can effortlessly anticipate how objects might move or change through interaction: imagining a cup being lifted, a knife slicing, or a lid being closed. We aim to endow computational systems with a similar ability to predict plausible future object motions directly from passive visual observation. We introduce ObjectForesight, a 3D object-centric dynamics model that predicts future 6-DoF poses and trajectories of rigid objects from short egocentric video sequences. Unlike conventional world or dynamics models that operate in pixel or latent space, ObjectForesight represents the world explicitly in 3D at the object level, enabling geometrically grounded and temporally coherent predictions that capture object affordances and trajectories. To train such a model at scale, we leverage recent advances in segmentation, mesh reconstruction, and 3D pose estimation to curate a dataset of more than 2 million short clips with pseudo-ground-truth 3D object trajectories. Through extensive experiments, we show that ObjectForesight achieves significant gains in accuracy, geometric consistency, and generalization to unseen objects and scenes, establishing a scalable framework for learning physically grounded, object-centric dynamics models directly from observation.
Project page: objectforesight.github.io
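The abstract does not describe the model architecture, so the following is only a minimal PyTorch sketch of what an object-centric 6-DoF trajectory predictor's interface could look like: a history of per-object poses in, a fixed-horizon future trajectory out. Every name and design choice here (the ObjectForesightSketch class, the GRU encoder, the quaternion-plus-translation pose format, the one-shot decoding head) is an illustrative assumption, not the paper's method.

```python
# Hypothetical sketch of a 6-DoF object-trajectory predictor.
# All names and design choices are assumptions for illustration,
# not the ObjectForesight authors' implementation.
import torch
import torch.nn as nn


class ObjectForesightSketch(nn.Module):
    """Toy object-centric dynamics model: encodes T observed per-object
    poses and regresses H future poses in one shot."""

    def __init__(self, pose_dim: int = 7, hidden: int = 256, horizon: int = 8):
        super().__init__()
        self.pose_dim = pose_dim
        self.horizon = horizon
        # Encode the observed pose history (unit quaternion + translation per step).
        self.encoder = nn.GRU(pose_dim, hidden, batch_first=True)
        # Decode the full fixed-horizon future trajectory from the final state.
        self.head = nn.Linear(hidden, horizon * pose_dim)

    def forward(self, past_poses: torch.Tensor) -> torch.Tensor:
        # past_poses: (B, T, 7) -- quaternion (4) + translation (3) per step.
        _, h = self.encoder(past_poses)        # h: (1, B, hidden)
        out = self.head(h[-1])                 # (B, horizon * 7)
        future = out.view(-1, self.horizon, self.pose_dim)
        # Re-normalize the quaternion part so predictions remain valid rotations.
        quat = nn.functional.normalize(future[..., :4], dim=-1)
        return torch.cat([quat, future[..., 4:]], dim=-1)


model = ObjectForesightSketch()
clip_poses = torch.randn(2, 16, 7)   # 2 clips, 16 observed pose steps each
print(model(clip_poses).shape)       # torch.Size([2, 8, 7])
```

In practice the inputs would come from the kind of pseudo-labeling pipeline the abstract describes (segmentation, mesh reconstruction, and 3D pose estimation run over egocentric video), rather than from raw pose histories as in this toy example.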
Similar Papers
ForeSight: Multi-View Streaming Joint Object Detection and Trajectory Forecasting
CV and Pattern Recognition
Helps self-driving cars predict where things will go.
Flowing from Reasoning to Motion: Learning 3D Hand Trajectory Prediction from Egocentric Human Interaction Videos
CV and Pattern Recognition
Helps robots predict hand movements by watching.
LookOut: Real-World Humanoid Egocentric Navigation
CV and Pattern Recognition
Helps humanoid robots navigate the real world using first-person vision.