Discriminately Treating Motion Components Evolves Joint Depth and Ego-Motion Learning
By: Mengtan Zhang, Zizhan Guo, Hongbo Zhao, and more
Unsupervised learning of depth and ego-motion, two fundamental 3D perception tasks, has made significant strides in recent years. However, most methods treat ego-motion as an auxiliary task, either mixing all motion types in supervision or excluding the depth-independent rotational motions from it. Such designs limit the incorporation of strong geometric constraints, reducing reliability and robustness under diverse conditions. This study introduces a discriminative treatment of motion components, leveraging the geometric regularities of their respective rigid flows to benefit both depth and ego-motion estimation. Given consecutive video frames, the network outputs are first used to align the optical axes and imaging planes of the source and target cameras. Optical flows between frames are transformed through these alignments, and the resulting deviations are quantified to impose geometric constraints individually on each ego-motion component, enabling more targeted refinement. These alignments further reformulate the joint learning process into coaxial and coplanar forms, in which depth and each translation component can be mutually derived through closed-form geometric relationships, introducing complementary constraints that improve depth robustness. DiMoDE, a general depth and ego-motion joint learning framework incorporating these designs, achieves state-of-the-art performance on multiple public datasets and on a newly collected, diverse real-world dataset, particularly under challenging conditions. Our source code will be publicly available at mias.group/DiMoDE upon publication.
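The abstract's distinction between depth-independent rotational motion and depth-dependent translational motion rests on standard pinhole geometry: warping pixels by the rotation alone requires no depth, while the remaining (translational) flow is coupled to depth. The sketch below illustrates that decomposition; it is not the authors' implementation, and all names (depth, K, R, t, rigid_flow_components) are illustrative assumptions.

import numpy as np

def rigid_flow_components(depth, K, R, t):
    """Split the rigid flow between two frames into rotational and
    translational parts under a pinhole camera model.

    depth : (H, W) per-pixel depth of the target frame
    K     : (3, 3) camera intrinsics
    R, t  : rotation (3, 3) and translation (3,) from target to source
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    rays = np.linalg.inv(K) @ pix                        # unit-depth rays

    # Rotation-only warp: independent of depth, so it can constrain the
    # rotational ego-motion components without involving the depth network.
    p_rot = K @ (R @ rays)
    p_rot = p_rot[:2] / p_rot[2:]

    # Full rigid warp: back-project with depth, rotate, translate, project.
    pts = rays * depth.reshape(1, -1)                    # 3D points
    p_full = K @ (R @ pts + t.reshape(3, 1))
    p_full = p_full[:2] / p_full[2:]

    flow_rot = (p_rot - pix[:2]).T.reshape(H, W, 2)      # depth-free part
    flow_trans = (p_full - p_rot).T.reshape(H, W, 2)     # depth-coupled part
    return flow_rot, flow_trans

Comparing each component of an observed optical flow against its rigid counterpart, as this decomposition permits, is one way to impose per-component geometric constraints of the kind the abstract describes.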