DQ3D: Depth-guided Query for Transformer-Based 3D Object Detection in Traffic Scenarios
By: Ziyu Wang, Wenhao Li, Ji Wu
Potential Business Impact:
Helps cars see hidden objects better.
3D object detection from multi-view images in traffic scenarios has attracted significant attention in recent years. Many existing approaches rely on object queries generated from 3D reference points to localize objects. A limitation of these methods is that some reference points fall far from the target object, which can lead to false positive detections. In this paper, we propose a depth-guided query generator for 3D object detection (DQ3D) that leverages depth information and 2D detections to ensure that reference points are sampled from the surface or interior of the object. Furthermore, to handle objects that are partially occluded in the current frame, we introduce a hybrid attention mechanism that fuses historical detection results with depth-guided queries, forming hybrid queries. Evaluation on the nuScenes dataset shows that our method outperforms the baseline by 6.3% in mean Average Precision (mAP) and 4.3% in the nuScenes Detection Score (NDS).
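To make the core idea of depth-guided query generation concrete, here is a minimal sketch (not the authors' code): it assumes a per-pixel depth map, a 2D detection box, and camera intrinsics, and back-projects pixels sampled inside the box to 3D so that the resulting reference points lie on or inside the detected object rather than at arbitrary locations. All function and variable names are illustrative.

```python
# Sketch of depth-guided reference point generation (illustrative only):
# back-project pixels inside a 2D detection box, using a depth map and the
# camera intrinsics, to obtain 3D reference points anchored on the object.
import numpy as np

def depth_guided_reference_points(box_2d, depth_map, intrinsics, num_points=4):
    """Sample 3D reference points from the depth map inside a 2D box.

    box_2d     : (x1, y1, x2, y2) pixel coordinates of a 2D detection.
    depth_map  : (H, W) array of per-pixel depth in meters.
    intrinsics : (3, 3) camera intrinsic matrix K.
    num_points : number of reference points to sample per detection.
    """
    x1, y1, x2, y2 = [int(round(v)) for v in box_2d]
    # Sample pixel locations uniformly at random inside the box.
    us = np.random.randint(x1, x2, size=num_points)
    vs = np.random.randint(y1, y2, size=num_points)
    ds = depth_map[vs, us]                       # depths at the sampled pixels

    # Back-project: X_cam = d * K^{-1} [u, v, 1]^T
    pix = np.stack([us, vs, np.ones_like(us)], axis=0).astype(np.float64)
    rays = np.linalg.inv(intrinsics) @ pix       # (3, num_points) unit-depth rays
    points_cam = rays * ds                       # scale each ray by its depth
    return points_cam.T                          # (num_points, 3) in camera frame

if __name__ == "__main__":
    K = np.array([[1000.0, 0.0, 800.0],
                  [0.0, 1000.0, 450.0],
                  [0.0, 0.0, 1.0]])
    depth = np.full((900, 1600), 20.0)           # toy scene: everything 20 m away
    refs = depth_guided_reference_points((700, 400, 900, 500), depth, K)
    print(refs)                                   # points roughly 20 m in front of the camera
```

In the paper's pipeline, such depth-anchored reference points would seed the object queries of the transformer decoder; the hybrid attention step then combines these queries with queries carried over from historical detections to recover objects occluded in the current frame.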
Similar Papers
Real-Time 3D Object Detection with Inference-Aligned Learning
CV and Pattern Recognition
Helps robots see and understand objects in 3D.
Difficulty-Aware Label-Guided Denoising for Monocular 3D Object Detection
CV and Pattern Recognition
Helps cars see better in 3D, even when objects are hidden.
Graph Query Networks for Object Detection with Automotive Radar
CV and Pattern Recognition
Helps cars see better with radar.