Score: 1

DGFusion: Dual-guided Fusion for Robust Multi-Modal 3D Object Detection

Published: November 13, 2025 | arXiv ID: 2511.10035v1

By: Feiyang Jia, Caiyan Jia, Ailin Liu, and more

Potential Business Impact:

Helps self-driving cars reliably detect distant, small, or partly hidden objects.

Business Areas:
Image Recognition, Data and Analytics, Software

As a critical task in autonomous driving perception systems, 3D object detection identifies and tracks key objects such as vehicles and pedestrians. However, detecting distant, small, or occluded objects (hard instances) remains a challenge, and failures on these instances directly compromise the safety of autonomous driving systems. We observe that existing multi-modal 3D object detection methods often follow a single-guided paradigm, failing to account for how the information density of hard instances differs between modalities. In this work, we propose DGFusion, built on the Dual-guided paradigm: it inherits the advantages of the Point-guide-Image paradigm while integrating the Image-guide-Point paradigm to overcome the limitations of either single paradigm. At the core of DGFusion, the Difficulty-aware Instance Pair Matcher (DIPM) performs instance-level feature matching based on difficulty to generate easy and hard instance pairs, while the Dual-guided Modules exploit the advantages of both pair types to enable effective multi-modal feature fusion. Experimental results show that DGFusion outperforms the baseline methods, with improvements of +1.0% mAP, +0.8% NDS, and +1.3% average recall on nuScenes. Extensive experiments further demonstrate consistent robustness gains for hard instance detection across ego-distance, size, visibility, and small-scale training scenarios.
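Since the abstract describes DIPM and the Dual-guided Modules only at a high level, the following is a minimal PyTorch sketch of the idea under stated assumptions. All names here (match_instance_pairs, DualGuidedFusion), the threshold-based difficulty split, and the cross-attention fusion are illustrative guesses, not the authors' implementation.

```python
# Hypothetical sketch of the dual-guided fusion idea from the abstract.
# The difficulty heuristic and module structure are assumptions for
# illustration only; they do not reproduce the paper's actual code.
import torch
import torch.nn as nn


def match_instance_pairs(point_feats, image_feats, difficulty, thresh=0.5):
    """Toy stand-in for the Difficulty-aware Instance Pair Matcher (DIPM).

    Splits matched LiDAR/image instance features into easy and hard pairs
    using a precomputed per-instance difficulty score (assumption: higher
    score means harder, e.g. distant, small, or occluded).
    """
    hard_mask = difficulty > thresh
    easy = (point_feats[~hard_mask], image_feats[~hard_mask])
    hard = (point_feats[hard_mask], image_feats[hard_mask])
    return easy, hard


class DualGuidedFusion(nn.Module):
    """Fuses each pair type with a different guidance direction.

    Assumption: easy pairs use point-guided fusion (LiDAR features query
    image features), while hard pairs use image-guided fusion (image
    features query sparse LiDAR features), mirroring the dual paradigm.
    """

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.point_guided = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_guided = nn.MultiheadAttention(dim, heads, batch_first=True)

    def fuse(self, attn, query, keyval):
        # Cross-attention with a residual connection back to the query.
        out, _ = attn(query.unsqueeze(0), keyval.unsqueeze(0), keyval.unsqueeze(0))
        return out.squeeze(0) + query

    def forward(self, point_feats, image_feats, difficulty):
        (ep, ei), (hp, hi) = match_instance_pairs(point_feats, image_feats, difficulty)
        fused_easy = self.fuse(self.point_guided, ep, ei)  # points guide images
        fused_hard = self.fuse(self.image_guided, hi, hp)  # images guide points
        return fused_easy, fused_hard


if __name__ == "__main__":
    n, d = 6, 128
    model = DualGuidedFusion(dim=d)
    pts, imgs = torch.randn(n, d), torch.randn(n, d)
    diff = torch.tensor([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])  # stand-in scores
    easy_out, hard_out = model(pts, imgs, diff)
    print(easy_out.shape, hard_out.shape)  # (3, 128) each
```

The design point this sketch illustrates is that easy and hard pairs take opposite guidance directions: dense LiDAR evidence leads the fusion when an instance is easy, while image evidence leads when LiDAR returns are sparse.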

Country of Origin
🇸🇬 🇨🇳 🇲🇴 China, Singapore, Macao

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition