StereoDETR: Stereo-based Transformer for 3D Object Detection
By: Shiyi Mu, Zichong Gu, Zhiqi Ai, and more
Potential Business Impact:
Helps cars see better and faster in 3D.
Compared to monocular 3D object detection, stereo-based 3D methods offer significantly higher accuracy but still suffer from high computational overhead and latency. The state-of-the-art stereo 3D detection method achieves twice the accuracy of monocular approaches, yet its inference speed is only half as fast. In this paper, we propose StereoDETR, an efficient stereo 3D object detection framework based on DETR. StereoDETR consists of two branches: a monocular DETR branch and a stereo branch. The DETR branch is built upon 2D DETR with additional channels for predicting object scale, orientation, and sampling points. The stereo branch leverages low-cost multi-scale disparity features to predict object-level depth maps. The two branches are coupled solely through a differentiable depth sampling strategy. To handle occlusion, we introduce a constrained supervision strategy for sampling points that requires no extra annotations. StereoDETR achieves real-time inference and is the first stereo-based method to surpass monocular approaches in speed. It also achieves competitive accuracy on the public KITTI benchmark, setting new state-of-the-art results on the pedestrian and cyclist subsets. The code is available at https://github.com/shiyi-mu/StereoDETR-OPEN.
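The abstract describes the two branches being coupled only through differentiable depth sampling: the DETR branch predicts per-query sampling points, and depths are read from the stereo branch's object-level depth map at those points. The sketch below illustrates one plausible way to do this with bilinear `grid_sample`; the function name, tensor shapes, and point normalization are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (assumed shapes and names, not the official StereoDETR code):
# couple a DETR branch and a stereo branch via differentiable bilinear depth sampling.
import torch
import torch.nn.functional as F

def sample_object_depths(depth_map, sampling_points):
    """Bilinearly sample per-query depths from an object-level depth map.

    depth_map:       (B, 1, H, W) depth predicted by the stereo branch.
    sampling_points: (B, Q, K, 2) normalized (x, y) in [0, 1], K points per query,
                     predicted by the DETR branch.
    returns:         (B, Q, K) sampled depths; gradients flow into both branches.
    """
    # grid_sample expects coordinates in [-1, 1].
    grid = sampling_points * 2.0 - 1.0                    # (B, Q, K, 2)
    depths = F.grid_sample(depth_map, grid,
                           mode="bilinear",
                           align_corners=False)           # (B, 1, Q, K)
    return depths.squeeze(1)                              # (B, Q, K)

# Toy usage: both the depth map and the sampling points receive gradients,
# which is what makes the coupling between the branches differentiable.
B, Q, K, H, W = 2, 100, 4, 96, 320
depth_map = torch.rand(B, 1, H, W, requires_grad=True)
points = torch.rand(B, Q, K, 2, requires_grad=True)
per_query_depth = sample_object_depths(depth_map, points).mean(dim=-1)  # (B, Q)
per_query_depth.sum().backward()
```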
Similar Papers
StereoMV2D: A Sparse Temporal Stereo-Enhanced Framework for Robust Multi-View 3D Object Detection
CV and Pattern Recognition
Helps self-driving cars see farther and better.
Unleashing the Temporal Potential of Stereo Event Cameras for Continuous-Time 3D Object Detection
CV and Pattern Recognition
Lets self-driving cars see moving objects better.
Stereo-based 3D Anomaly Object Detection for Autonomous Driving: A New Dataset and Baseline
CV and Pattern Recognition
Helps self-driving cars spot unusual road objects.