DeltaFlow: An Efficient Multi-frame Scene Flow Estimation Method
By: Qingwen Zhang, Xiaomeng Zhu, Yushan Zhang, and more
Potential Business Impact:
Helps self-driving cars see moving objects better.
Previous dominant methods for scene flow estimation focus mainly on input from two consecutive frames, neglecting valuable information in the temporal domain. While recent trends shift towards multi-frame reasoning, they suffer from rapidly escalating computational costs as the number of frames grows. To leverage temporal information more efficiently, we propose DeltaFlow ($\Delta$Flow), a lightweight 3D framework that captures motion cues via a $\Delta$ scheme, extracting temporal features with minimal computational cost, regardless of the number of frames. Additionally, scene flow estimation faces challenges such as imbalanced object class distributions and motion inconsistency. To tackle these issues, we introduce a Category-Balanced Loss to enhance learning across underrepresented classes and an Instance Consistency Loss to enforce coherent object motion, improving flow accuracy. Extensive evaluations on the Argoverse 2 and Waymo datasets show that $\Delta$Flow achieves state-of-the-art performance with up to 22% lower error and $2\times$ faster inference compared to the next-best multi-frame supervised method, while also demonstrating a strong cross-domain generalization ability. The code is open-sourced at https://github.com/Kin-Zhang/DeltaFlow along with trained model weights.
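The abstract names two loss terms without detail: a Category-Balanced Loss that upweights rare classes, and an Instance Consistency Loss that encourages points on the same object to move together. As a rough illustration only (not the paper's actual implementation; the function names, per-class averaging, and variance penalty below are assumptions), both ideas can be sketched like this:

```python
import numpy as np

def category_balanced_epe(pred_flow, gt_flow, labels):
    """Hypothetical sketch of a category-balanced error: average the
    end-point error (EPE) within each class first, then across classes,
    so rare classes (e.g. pedestrians) weigh as much as common ones."""
    epe = np.linalg.norm(pred_flow - gt_flow, axis=1)  # per-point EPE
    per_class = [epe[labels == c].mean() for c in np.unique(labels)]
    return float(np.mean(per_class))

def instance_consistency_loss(pred_flow, instance_ids):
    """Hypothetical sketch of an instance-consistency penalty: the
    variance of predicted flow within each instance, averaged over
    instances, so points on one rigid object share a coherent motion."""
    penalties = []
    for i in np.unique(instance_ids):
        flows = pred_flow[instance_ids == i]
        penalties.append(((flows - flows.mean(axis=0)) ** 2).sum(axis=1).mean())
    return float(np.mean(penalties))
```

With a class-imbalanced point cloud, the balanced error differs from a plain mean: if 9 background points have zero error and 1 pedestrian point has 1 m error, the plain mean EPE is 0.1 m while the balanced version reports 0.5 m, keeping pressure on the rare class.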
Similar Papers
DoGFlow: Self-Supervised LiDAR Scene Flow via Cross-Modal Doppler Guidance
CV and Pattern Recognition
Helps cars see moving objects without human help.
ProbDiffFlow: An Efficient Learning-Free Framework for Probabilistic Single-Image Optical Flow Estimation
CV and Pattern Recognition
Shows movement from just one picture.
Flux4D: Flow-based Unsupervised 4D Reconstruction
CV and Pattern Recognition
Builds 3D worlds from videos in seconds.