Self-Supervised Learning of Motion Concepts by Optimizing Counterfactuals
By: Stefan Stojanov, David Wendt, Seungwoo Kim, and more
Potential Business Impact:
Teaches computers to see how things move.
Estimating motion in videos is an essential computer vision problem with many downstream applications, including controllable video generation and robotics. Current solutions are primarily trained using synthetic data or require tuning of situation-specific heuristics, which inherently limits these models' capabilities in real-world contexts. Despite recent developments in large-scale self-supervised learning from videos, leveraging such representations for motion estimation remains relatively underexplored. In this work, we develop Opt-CWM, a self-supervised technique for flow and occlusion estimation from a pre-trained next-frame prediction model. Opt-CWM works by learning to optimize counterfactual probes that extract motion information from a base video model, avoiding the need for fixed heuristics while training on unrestricted video inputs. We achieve state-of-the-art performance for motion estimation on real-world videos while requiring no labeled data.
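The counterfactual-probing idea behind Opt-CWM can be illustrated with a toy sketch: perturb a pixel in the input frame, run a next-frame predictor on both the clean and perturbed inputs, and read off where the perturbation reappears in the prediction. Everything below is illustrative, not the paper's method: `predict_next_frame` is a stand-in that simply translates the frame by a fixed motion, whereas Opt-CWM uses a pre-trained video model and *learns* the probe rather than injecting a fixed one.

```python
import numpy as np

def predict_next_frame(frame, shift=(2, 3)):
    # Toy stand-in for a learned next-frame predictor: it translates
    # the frame by a fixed (dy, dx). A real base video model would
    # replace this.
    return np.roll(frame, shift, axis=(0, 1))

def counterfactual_flow(frame, point, predictor, probe_value=10.0):
    """Estimate flow at `point` by injecting a small counterfactual
    perturbation and locating where it lands in the predicted frame."""
    clean_pred = predictor(frame)
    probed = frame.copy()
    y, x = point
    probed[y, x] += probe_value          # the counterfactual probe
    probed_pred = predictor(probed)
    # The probe's effect shows up where the perturbed pixel "moved to".
    diff = np.abs(probed_pred - clean_pred)
    yy, xx = np.unravel_index(np.argmax(diff), diff.shape)
    return (yy - y, xx - x)              # estimated displacement (dy, dx)

frame = np.random.rand(32, 32)
flow = counterfactual_flow(frame, (10, 10), predict_next_frame)
print(flow)  # → (2, 3)
```

With a real learned predictor the probe's effect would be diffuse rather than a single pixel, which is why Opt-CWM optimizes the probe's form instead of relying on a hand-tuned perturbation like the one above.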
Similar Papers
Leveraging Motion Information for Better Self-Supervised Video Correspondence Learning
CV and Pattern Recognition
Helps computers track moving things in videos.
Causally Steered Diffusion for Automated Video Counterfactual Generation
CV and Pattern Recognition
Makes videos show realistic "what if" changes.
MotionFlow: Learning Implicit Motion Flow for Complex Camera Trajectory Control in Video Generation
CV and Pattern Recognition
Makes videos follow camera moves perfectly.